Test Report: KVM_Linux_crio 21341

                    
890003c5847d742050af13aa4e3a32f9efad98ac:2025-09-04:41269

Test failures (10/322)

TestAddons/parallel/Ingress (158.02s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-389176 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-389176 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-389176 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [abfe0e8a-d948-49f7-a8d4-d4af5a5f1495] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [abfe0e8a-d948-49f7-a8d4-d4af5a5f1495] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.004925153s
I0903 22:31:49.329228  113288 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-389176 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-389176 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m15.805907486s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
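
The root-cause signal here is curl's exit code: the "ssh: Process exited with status 28" in the stderr block is curl's own code 28 ("operation timed out"), meaning nothing answered on 127.0.0.1:80 inside the VM, and minikube then surfaced it as exit status 1 after roughly 2m15s of retrying. For reproducing the probe outside the test harness, a minimal Go sketch follows; the binary path, profile name, and curl arguments are copied from the log above, while the 3-minute deadline and the main wrapper are illustrative assumptions, not addons_test.go's actual code.

// Reproduction sketch (assumptions noted above); not the test's actual code.
package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// Illustrative overall deadline; the test itself retried for ~2m15s.
	ctx, cancel := context.WithTimeout(context.Background(), 3*time.Minute)
	defer cancel()

	// Same probe the test runs: curl the ingress controller from inside the
	// VM with the Host header of the test Ingress. A "status 28" from ssh is
	// curl's exit code 28 ("operation timed out").
	cmd := exec.CommandContext(ctx, "out/minikube-linux-amd64", "-p", "addons-389176",
		"ssh", "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'")
	out, err := cmd.CombinedOutput()
	fmt.Printf("output:\n%s\n", out)
	if err != nil {
		fmt.Printf("probe failed: %v\n", err)
	}
}
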
addons_test.go:288: (dbg) Run:  kubectl --context addons-389176 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-389176 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.39.230
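
The three steps above are the ingress-dns follow-up: apply the example record, fetch the node IP, then resolve hello-john.test directly against that IP, where the ingress-dns addon serves DNS. A rough Go equivalent of the nslookup step is sketched below; the node IP comes from the "minikube ip" output above, while port 53 and the 5-second timeout are assumptions based on the addon's standard setup.

// Sketch of the nslookup step in Go (assumptions noted above).
package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

func main() {
	r := &net.Resolver{
		PreferGo: true, // force the Go resolver so the custom Dial is used
		Dial: func(ctx context.Context, network, _ string) (net.Conn, error) {
			d := net.Dialer{Timeout: 5 * time.Second}
			// Ignore the system resolver address and query the minikube node directly.
			return d.DialContext(ctx, network, "192.168.39.230:53")
		},
	}
	addrs, err := r.LookupHost(context.Background(), "hello-john.test")
	if err != nil {
		fmt.Println("lookup failed:", err)
		return
	}
	fmt.Println("hello-john.test resolves to:", addrs)
}
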
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-389176 -n addons-389176
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-389176 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-389176 logs -n 25: (1.277279043s)
helpers_test.go:260: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                  ARGS                                                                                                                                                                                                                                  │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p download-only-462504                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-only-462504 │ jenkins │ v1.36.0 │ 03 Sep 25 22:27 UTC │ 03 Sep 25 22:27 UTC │
	│ start   │ --download-only -p binary-mirror-140139 --alsologtostderr --binary-mirror http://127.0.0.1:46845 --driver=kvm2  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-140139 │ jenkins │ v1.36.0 │ 03 Sep 25 22:27 UTC │                     │
	│ delete  │ -p binary-mirror-140139                                                                                                                                                                                                                                                                                                                                                                                                                                                │ binary-mirror-140139 │ jenkins │ v1.36.0 │ 03 Sep 25 22:27 UTC │ 03 Sep 25 22:27 UTC │
	│ addons  │ disable dashboard -p addons-389176                                                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-389176        │ jenkins │ v1.36.0 │ 03 Sep 25 22:27 UTC │                     │
	│ addons  │ enable dashboard -p addons-389176                                                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-389176        │ jenkins │ v1.36.0 │ 03 Sep 25 22:27 UTC │                     │
	│ start   │ -p addons-389176 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-389176        │ jenkins │ v1.36.0 │ 03 Sep 25 22:27 UTC │ 03 Sep 25 22:31 UTC │
	│ addons  │ addons-389176 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                            │ addons-389176        │ jenkins │ v1.36.0 │ 03 Sep 25 22:31 UTC │ 03 Sep 25 22:31 UTC │
	│ addons  │ addons-389176 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-389176        │ jenkins │ v1.36.0 │ 03 Sep 25 22:31 UTC │ 03 Sep 25 22:31 UTC │
	│ addons  │ enable headlamp -p addons-389176 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                │ addons-389176        │ jenkins │ v1.36.0 │ 03 Sep 25 22:31 UTC │ 03 Sep 25 22:31 UTC │
	│ addons  │ addons-389176 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-389176        │ jenkins │ v1.36.0 │ 03 Sep 25 22:31 UTC │ 03 Sep 25 22:31 UTC │
	│ addons  │ addons-389176 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                               │ addons-389176        │ jenkins │ v1.36.0 │ 03 Sep 25 22:31 UTC │ 03 Sep 25 22:31 UTC │
	│ addons  │ addons-389176 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-389176        │ jenkins │ v1.36.0 │ 03 Sep 25 22:31 UTC │ 03 Sep 25 22:31 UTC │
	│ ip      │ addons-389176 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-389176        │ jenkins │ v1.36.0 │ 03 Sep 25 22:31 UTC │ 03 Sep 25 22:31 UTC │
	│ addons  │ addons-389176 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-389176        │ jenkins │ v1.36.0 │ 03 Sep 25 22:31 UTC │ 03 Sep 25 22:31 UTC │
	│ ssh     │ addons-389176 ssh cat /opt/local-path-provisioner/pvc-427058f3-6272-436c-9cfd-91031a1fcb72_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                      │ addons-389176        │ jenkins │ v1.36.0 │ 03 Sep 25 22:31 UTC │ 03 Sep 25 22:31 UTC │
	│ addons  │ addons-389176 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                        │ addons-389176        │ jenkins │ v1.36.0 │ 03 Sep 25 22:31 UTC │ 03 Sep 25 22:32 UTC │
	│ addons  │ addons-389176 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                   │ addons-389176        │ jenkins │ v1.36.0 │ 03 Sep 25 22:31 UTC │ 03 Sep 25 22:31 UTC │
	│ ssh     │ addons-389176 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                               │ addons-389176        │ jenkins │ v1.36.0 │ 03 Sep 25 22:31 UTC │                     │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-389176                                                                                                                                                                                                                                                                                                                                                                                         │ addons-389176        │ jenkins │ v1.36.0 │ 03 Sep 25 22:31 UTC │ 03 Sep 25 22:31 UTC │
	│ addons  │ addons-389176 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-389176        │ jenkins │ v1.36.0 │ 03 Sep 25 22:31 UTC │ 03 Sep 25 22:31 UTC │
	│ addons  │ addons-389176 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                               │ addons-389176        │ jenkins │ v1.36.0 │ 03 Sep 25 22:31 UTC │ 03 Sep 25 22:32 UTC │
	│ addons  │ addons-389176 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-389176        │ jenkins │ v1.36.0 │ 03 Sep 25 22:32 UTC │ 03 Sep 25 22:32 UTC │
	│ addons  │ addons-389176 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                    │ addons-389176        │ jenkins │ v1.36.0 │ 03 Sep 25 22:32 UTC │ 03 Sep 25 22:32 UTC │
	│ addons  │ addons-389176 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                │ addons-389176        │ jenkins │ v1.36.0 │ 03 Sep 25 22:32 UTC │ 03 Sep 25 22:32 UTC │
	│ ip      │ addons-389176 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-389176        │ jenkins │ v1.36.0 │ 03 Sep 25 22:34 UTC │ 03 Sep 25 22:34 UTC │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/03 22:27:34
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0903 22:27:34.138483  113968 out.go:360] Setting OutFile to fd 1 ...
	I0903 22:27:34.138716  113968 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0903 22:27:34.138726  113968 out.go:374] Setting ErrFile to fd 2...
	I0903 22:27:34.138731  113968 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0903 22:27:34.138892  113968 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21341-109162/.minikube/bin
	I0903 22:27:34.139480  113968 out.go:368] Setting JSON to false
	I0903 22:27:34.140409  113968 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":4198,"bootTime":1756934256,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0903 22:27:34.140500  113968 start.go:140] virtualization: kvm guest
	I0903 22:27:34.142102  113968 out.go:179] * [addons-389176] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	I0903 22:27:34.143364  113968 out.go:179]   - MINIKUBE_LOCATION=21341
	I0903 22:27:34.143377  113968 notify.go:220] Checking for updates...
	I0903 22:27:34.145467  113968 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0903 22:27:34.146548  113968 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21341-109162/kubeconfig
	I0903 22:27:34.147528  113968 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21341-109162/.minikube
	I0903 22:27:34.148482  113968 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0903 22:27:34.149467  113968 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0903 22:27:34.150700  113968 driver.go:421] Setting default libvirt URI to qemu:///system
	I0903 22:27:34.183373  113968 out.go:179] * Using the kvm2 driver based on user configuration
	I0903 22:27:34.184376  113968 start.go:304] selected driver: kvm2
	I0903 22:27:34.184394  113968 start.go:918] validating driver "kvm2" against <nil>
	I0903 22:27:34.184426  113968 start.go:929] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0903 22:27:34.185492  113968 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0903 22:27:34.185606  113968 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21341-109162/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0903 22:27:34.200218  113968 install.go:137] /home/jenkins/minikube-integration/21341-109162/.minikube/bin/docker-machine-driver-kvm2 version is 1.36.0
	I0903 22:27:34.200279  113968 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0903 22:27:34.200665  113968 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0903 22:27:34.200716  113968 cni.go:84] Creating CNI manager for ""
	I0903 22:27:34.200789  113968 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0903 22:27:34.200807  113968 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0903 22:27:34.200882  113968 start.go:348] cluster config:
	{Name:addons-389176 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-389176 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0903 22:27:34.201051  113968 iso.go:125] acquiring lock: {Name:mk1032fbced3c9e76ba5a04480289e9f07d0eb9c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0903 22:27:34.202602  113968 out.go:179] * Starting "addons-389176" primary control-plane node in "addons-389176" cluster
	I0903 22:27:34.203712  113968 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0903 22:27:34.203752  113968 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21341-109162/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4
	I0903 22:27:34.203765  113968 cache.go:58] Caching tarball of preloaded images
	I0903 22:27:34.203867  113968 preload.go:172] Found /home/jenkins/minikube-integration/21341-109162/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0903 22:27:34.203884  113968 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
	I0903 22:27:34.204343  113968 profile.go:143] Saving config to /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/addons-389176/config.json ...
	I0903 22:27:34.204375  113968 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/addons-389176/config.json: {Name:mked9dd6d6ff3790929734decd5ec48068f4f6df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0903 22:27:34.204564  113968 start.go:360] acquireMachinesLock for addons-389176: {Name:mkcbe368d68a51a2a3c0eadc653c4df7d9736b4b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0903 22:27:34.204641  113968 start.go:364] duration metric: took 50.373µs to acquireMachinesLock for "addons-389176"
	I0903 22:27:34.204669  113968 start.go:93] Provisioning new machine with config: &{Name:addons-389176 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21147/minikube-v1.36.0-1753487480-21147-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-389176 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0903 22:27:34.204750  113968 start.go:125] createHost starting for "" (driver="kvm2")
	I0903 22:27:34.206317  113968 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	I0903 22:27:34.206494  113968 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21341-109162/.minikube/bin/docker-machine-driver-kvm2
	I0903 22:27:34.206548  113968 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0903 22:27:34.221465  113968 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34683
	I0903 22:27:34.222109  113968 main.go:141] libmachine: () Calling .GetVersion
	I0903 22:27:34.222687  113968 main.go:141] libmachine: Using API Version  1
	I0903 22:27:34.222709  113968 main.go:141] libmachine: () Calling .SetConfigRaw
	I0903 22:27:34.223046  113968 main.go:141] libmachine: () Calling .GetMachineName
	I0903 22:27:34.223248  113968 main.go:141] libmachine: (addons-389176) Calling .GetMachineName
	I0903 22:27:34.223397  113968 main.go:141] libmachine: (addons-389176) Calling .DriverName
	I0903 22:27:34.223560  113968 start.go:159] libmachine.API.Create for "addons-389176" (driver="kvm2")
	I0903 22:27:34.223588  113968 client.go:168] LocalClient.Create starting
	I0903 22:27:34.223623  113968 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21341-109162/.minikube/certs/ca.pem
	I0903 22:27:34.329739  113968 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21341-109162/.minikube/certs/cert.pem
	I0903 22:27:34.661435  113968 main.go:141] libmachine: Running pre-create checks...
	I0903 22:27:34.661495  113968 main.go:141] libmachine: (addons-389176) Calling .PreCreateCheck
	I0903 22:27:34.661912  113968 main.go:141] libmachine: (addons-389176) Calling .GetConfigRaw
	I0903 22:27:34.662412  113968 main.go:141] libmachine: Creating machine...
	I0903 22:27:34.662428  113968 main.go:141] libmachine: (addons-389176) Calling .Create
	I0903 22:27:34.662613  113968 main.go:141] libmachine: (addons-389176) creating KVM machine...
	I0903 22:27:34.662622  113968 main.go:141] libmachine: (addons-389176) creating network...
	I0903 22:27:34.663847  113968 main.go:141] libmachine: (addons-389176) DBG | found existing default KVM network
	I0903 22:27:34.664534  113968 main.go:141] libmachine: (addons-389176) DBG | I0903 22:27:34.664397  113990 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000208f10}
	I0903 22:27:34.664605  113968 main.go:141] libmachine: (addons-389176) DBG | created network xml: 
	I0903 22:27:34.664629  113968 main.go:141] libmachine: (addons-389176) DBG | <network>
	I0903 22:27:34.664641  113968 main.go:141] libmachine: (addons-389176) DBG |   <name>mk-addons-389176</name>
	I0903 22:27:34.664652  113968 main.go:141] libmachine: (addons-389176) DBG |   <dns enable='no'/>
	I0903 22:27:34.664661  113968 main.go:141] libmachine: (addons-389176) DBG |   
	I0903 22:27:34.664675  113968 main.go:141] libmachine: (addons-389176) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0903 22:27:34.664685  113968 main.go:141] libmachine: (addons-389176) DBG |     <dhcp>
	I0903 22:27:34.664694  113968 main.go:141] libmachine: (addons-389176) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0903 22:27:34.664723  113968 main.go:141] libmachine: (addons-389176) DBG |     </dhcp>
	I0903 22:27:34.664740  113968 main.go:141] libmachine: (addons-389176) DBG |   </ip>
	I0903 22:27:34.664747  113968 main.go:141] libmachine: (addons-389176) DBG |   
	I0903 22:27:34.664751  113968 main.go:141] libmachine: (addons-389176) DBG | </network>
	I0903 22:27:34.664762  113968 main.go:141] libmachine: (addons-389176) DBG | 
	I0903 22:27:34.669891  113968 main.go:141] libmachine: (addons-389176) DBG | trying to create private KVM network mk-addons-389176 192.168.39.0/24...
	I0903 22:27:34.734980  113968 main.go:141] libmachine: (addons-389176) DBG | private KVM network mk-addons-389176 192.168.39.0/24 created
	I0903 22:27:34.735029  113968 main.go:141] libmachine: (addons-389176) DBG | I0903 22:27:34.734935  113990 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/21341-109162/.minikube
	I0903 22:27:34.735045  113968 main.go:141] libmachine: (addons-389176) setting up store path in /home/jenkins/minikube-integration/21341-109162/.minikube/machines/addons-389176 ...
	I0903 22:27:34.735058  113968 main.go:141] libmachine: (addons-389176) building disk image from file:///home/jenkins/minikube-integration/21341-109162/.minikube/cache/iso/amd64/minikube-v1.36.0-1753487480-21147-amd64.iso
	I0903 22:27:34.735222  113968 main.go:141] libmachine: (addons-389176) Downloading /home/jenkins/minikube-integration/21341-109162/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/21341-109162/.minikube/cache/iso/amd64/minikube-v1.36.0-1753487480-21147-amd64.iso...
	I0903 22:27:35.027619  113968 main.go:141] libmachine: (addons-389176) DBG | I0903 22:27:35.027480  113990 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/21341-109162/.minikube/machines/addons-389176/id_rsa...
	I0903 22:27:35.246572  113968 main.go:141] libmachine: (addons-389176) DBG | I0903 22:27:35.246441  113990 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/21341-109162/.minikube/machines/addons-389176/addons-389176.rawdisk...
	I0903 22:27:35.246597  113968 main.go:141] libmachine: (addons-389176) DBG | Writing magic tar header
	I0903 22:27:35.246606  113968 main.go:141] libmachine: (addons-389176) DBG | Writing SSH key tar header
	I0903 22:27:35.246614  113968 main.go:141] libmachine: (addons-389176) DBG | I0903 22:27:35.246582  113990 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/21341-109162/.minikube/machines/addons-389176 ...
	I0903 22:27:35.246731  113968 main.go:141] libmachine: (addons-389176) setting executable bit set on /home/jenkins/minikube-integration/21341-109162/.minikube/machines/addons-389176 (perms=drwx------)
	I0903 22:27:35.246750  113968 main.go:141] libmachine: (addons-389176) setting executable bit set on /home/jenkins/minikube-integration/21341-109162/.minikube/machines (perms=drwxr-xr-x)
	I0903 22:27:35.246762  113968 main.go:141] libmachine: (addons-389176) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21341-109162/.minikube/machines/addons-389176
	I0903 22:27:35.246775  113968 main.go:141] libmachine: (addons-389176) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21341-109162/.minikube/machines
	I0903 22:27:35.246783  113968 main.go:141] libmachine: (addons-389176) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21341-109162/.minikube
	I0903 22:27:35.246793  113968 main.go:141] libmachine: (addons-389176) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21341-109162
	I0903 22:27:35.246801  113968 main.go:141] libmachine: (addons-389176) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0903 22:27:35.246808  113968 main.go:141] libmachine: (addons-389176) DBG | checking permissions on dir: /home/jenkins
	I0903 22:27:35.246815  113968 main.go:141] libmachine: (addons-389176) DBG | checking permissions on dir: /home
	I0903 22:27:35.246825  113968 main.go:141] libmachine: (addons-389176) DBG | skipping /home - not owner
	I0903 22:27:35.246840  113968 main.go:141] libmachine: (addons-389176) setting executable bit set on /home/jenkins/minikube-integration/21341-109162/.minikube (perms=drwxr-xr-x)
	I0903 22:27:35.246856  113968 main.go:141] libmachine: (addons-389176) setting executable bit set on /home/jenkins/minikube-integration/21341-109162 (perms=drwxrwxr-x)
	I0903 22:27:35.246869  113968 main.go:141] libmachine: (addons-389176) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0903 22:27:35.246888  113968 main.go:141] libmachine: (addons-389176) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0903 22:27:35.246897  113968 main.go:141] libmachine: (addons-389176) creating domain...
	I0903 22:27:35.247779  113968 main.go:141] libmachine: (addons-389176) define libvirt domain using xml: 
	I0903 22:27:35.247800  113968 main.go:141] libmachine: (addons-389176) <domain type='kvm'>
	I0903 22:27:35.247808  113968 main.go:141] libmachine: (addons-389176)   <name>addons-389176</name>
	I0903 22:27:35.247815  113968 main.go:141] libmachine: (addons-389176)   <memory unit='MiB'>4096</memory>
	I0903 22:27:35.247823  113968 main.go:141] libmachine: (addons-389176)   <vcpu>2</vcpu>
	I0903 22:27:35.247829  113968 main.go:141] libmachine: (addons-389176)   <features>
	I0903 22:27:35.247840  113968 main.go:141] libmachine: (addons-389176)     <acpi/>
	I0903 22:27:35.247846  113968 main.go:141] libmachine: (addons-389176)     <apic/>
	I0903 22:27:35.247857  113968 main.go:141] libmachine: (addons-389176)     <pae/>
	I0903 22:27:35.247871  113968 main.go:141] libmachine: (addons-389176)     
	I0903 22:27:35.247881  113968 main.go:141] libmachine: (addons-389176)   </features>
	I0903 22:27:35.247897  113968 main.go:141] libmachine: (addons-389176)   <cpu mode='host-passthrough'>
	I0903 22:27:35.247910  113968 main.go:141] libmachine: (addons-389176)   
	I0903 22:27:35.247919  113968 main.go:141] libmachine: (addons-389176)   </cpu>
	I0903 22:27:35.247928  113968 main.go:141] libmachine: (addons-389176)   <os>
	I0903 22:27:35.247934  113968 main.go:141] libmachine: (addons-389176)     <type>hvm</type>
	I0903 22:27:35.247943  113968 main.go:141] libmachine: (addons-389176)     <boot dev='cdrom'/>
	I0903 22:27:35.247952  113968 main.go:141] libmachine: (addons-389176)     <boot dev='hd'/>
	I0903 22:27:35.247964  113968 main.go:141] libmachine: (addons-389176)     <bootmenu enable='no'/>
	I0903 22:27:35.247976  113968 main.go:141] libmachine: (addons-389176)   </os>
	I0903 22:27:35.247986  113968 main.go:141] libmachine: (addons-389176)   <devices>
	I0903 22:27:35.247993  113968 main.go:141] libmachine: (addons-389176)     <disk type='file' device='cdrom'>
	I0903 22:27:35.248001  113968 main.go:141] libmachine: (addons-389176)       <source file='/home/jenkins/minikube-integration/21341-109162/.minikube/machines/addons-389176/boot2docker.iso'/>
	I0903 22:27:35.248008  113968 main.go:141] libmachine: (addons-389176)       <target dev='hdc' bus='scsi'/>
	I0903 22:27:35.248013  113968 main.go:141] libmachine: (addons-389176)       <readonly/>
	I0903 22:27:35.248019  113968 main.go:141] libmachine: (addons-389176)     </disk>
	I0903 22:27:35.248025  113968 main.go:141] libmachine: (addons-389176)     <disk type='file' device='disk'>
	I0903 22:27:35.248033  113968 main.go:141] libmachine: (addons-389176)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0903 22:27:35.248041  113968 main.go:141] libmachine: (addons-389176)       <source file='/home/jenkins/minikube-integration/21341-109162/.minikube/machines/addons-389176/addons-389176.rawdisk'/>
	I0903 22:27:35.248047  113968 main.go:141] libmachine: (addons-389176)       <target dev='hda' bus='virtio'/>
	I0903 22:27:35.248071  113968 main.go:141] libmachine: (addons-389176)     </disk>
	I0903 22:27:35.248092  113968 main.go:141] libmachine: (addons-389176)     <interface type='network'>
	I0903 22:27:35.248101  113968 main.go:141] libmachine: (addons-389176)       <source network='mk-addons-389176'/>
	I0903 22:27:35.248116  113968 main.go:141] libmachine: (addons-389176)       <model type='virtio'/>
	I0903 22:27:35.248146  113968 main.go:141] libmachine: (addons-389176)     </interface>
	I0903 22:27:35.248167  113968 main.go:141] libmachine: (addons-389176)     <interface type='network'>
	I0903 22:27:35.248178  113968 main.go:141] libmachine: (addons-389176)       <source network='default'/>
	I0903 22:27:35.248185  113968 main.go:141] libmachine: (addons-389176)       <model type='virtio'/>
	I0903 22:27:35.248191  113968 main.go:141] libmachine: (addons-389176)     </interface>
	I0903 22:27:35.248197  113968 main.go:141] libmachine: (addons-389176)     <serial type='pty'>
	I0903 22:27:35.248203  113968 main.go:141] libmachine: (addons-389176)       <target port='0'/>
	I0903 22:27:35.248209  113968 main.go:141] libmachine: (addons-389176)     </serial>
	I0903 22:27:35.248214  113968 main.go:141] libmachine: (addons-389176)     <console type='pty'>
	I0903 22:27:35.248221  113968 main.go:141] libmachine: (addons-389176)       <target type='serial' port='0'/>
	I0903 22:27:35.248226  113968 main.go:141] libmachine: (addons-389176)     </console>
	I0903 22:27:35.248232  113968 main.go:141] libmachine: (addons-389176)     <rng model='virtio'>
	I0903 22:27:35.248246  113968 main.go:141] libmachine: (addons-389176)       <backend model='random'>/dev/random</backend>
	I0903 22:27:35.248263  113968 main.go:141] libmachine: (addons-389176)     </rng>
	I0903 22:27:35.248270  113968 main.go:141] libmachine: (addons-389176)     
	I0903 22:27:35.248276  113968 main.go:141] libmachine: (addons-389176)     
	I0903 22:27:35.248283  113968 main.go:141] libmachine: (addons-389176)   </devices>
	I0903 22:27:35.248287  113968 main.go:141] libmachine: (addons-389176) </domain>
	I0903 22:27:35.248311  113968 main.go:141] libmachine: (addons-389176) 
	I0903 22:27:35.253677  113968 main.go:141] libmachine: (addons-389176) DBG | domain addons-389176 has defined MAC address 52:54:00:dc:8f:65 in network default
	I0903 22:27:35.254273  113968 main.go:141] libmachine: (addons-389176) starting domain...
	I0903 22:27:35.254293  113968 main.go:141] libmachine: (addons-389176) DBG | domain addons-389176 has defined MAC address 52:54:00:ad:01:4e in network mk-addons-389176
	I0903 22:27:35.254301  113968 main.go:141] libmachine: (addons-389176) ensuring networks are active...
	I0903 22:27:35.254942  113968 main.go:141] libmachine: (addons-389176) Ensuring network default is active
	I0903 22:27:35.255238  113968 main.go:141] libmachine: (addons-389176) Ensuring network mk-addons-389176 is active
	I0903 22:27:35.256329  113968 main.go:141] libmachine: (addons-389176) getting domain XML...
	I0903 22:27:35.256991  113968 main.go:141] libmachine: (addons-389176) creating domain...
	I0903 22:27:36.608595  113968 main.go:141] libmachine: (addons-389176) waiting for IP...
	I0903 22:27:36.609221  113968 main.go:141] libmachine: (addons-389176) DBG | domain addons-389176 has defined MAC address 52:54:00:ad:01:4e in network mk-addons-389176
	I0903 22:27:36.609533  113968 main.go:141] libmachine: (addons-389176) DBG | unable to find current IP address of domain addons-389176 in network mk-addons-389176
	I0903 22:27:36.609611  113968 main.go:141] libmachine: (addons-389176) DBG | I0903 22:27:36.609544  113990 retry.go:31] will retry after 246.251895ms: waiting for domain to come up
	I0903 22:27:36.856981  113968 main.go:141] libmachine: (addons-389176) DBG | domain addons-389176 has defined MAC address 52:54:00:ad:01:4e in network mk-addons-389176
	I0903 22:27:36.857436  113968 main.go:141] libmachine: (addons-389176) DBG | unable to find current IP address of domain addons-389176 in network mk-addons-389176
	I0903 22:27:36.857466  113968 main.go:141] libmachine: (addons-389176) DBG | I0903 22:27:36.857405  113990 retry.go:31] will retry after 384.018482ms: waiting for domain to come up
	I0903 22:27:37.243061  113968 main.go:141] libmachine: (addons-389176) DBG | domain addons-389176 has defined MAC address 52:54:00:ad:01:4e in network mk-addons-389176
	I0903 22:27:37.243489  113968 main.go:141] libmachine: (addons-389176) DBG | unable to find current IP address of domain addons-389176 in network mk-addons-389176
	I0903 22:27:37.243523  113968 main.go:141] libmachine: (addons-389176) DBG | I0903 22:27:37.243444  113990 retry.go:31] will retry after 442.523224ms: waiting for domain to come up
	I0903 22:27:37.688029  113968 main.go:141] libmachine: (addons-389176) DBG | domain addons-389176 has defined MAC address 52:54:00:ad:01:4e in network mk-addons-389176
	I0903 22:27:37.688532  113968 main.go:141] libmachine: (addons-389176) DBG | unable to find current IP address of domain addons-389176 in network mk-addons-389176
	I0903 22:27:37.688572  113968 main.go:141] libmachine: (addons-389176) DBG | I0903 22:27:37.688479  113990 retry.go:31] will retry after 490.221477ms: waiting for domain to come up
	I0903 22:27:38.180053  113968 main.go:141] libmachine: (addons-389176) DBG | domain addons-389176 has defined MAC address 52:54:00:ad:01:4e in network mk-addons-389176
	I0903 22:27:38.180488  113968 main.go:141] libmachine: (addons-389176) DBG | unable to find current IP address of domain addons-389176 in network mk-addons-389176
	I0903 22:27:38.180546  113968 main.go:141] libmachine: (addons-389176) DBG | I0903 22:27:38.180433  113990 retry.go:31] will retry after 511.053437ms: waiting for domain to come up
	I0903 22:27:38.693245  113968 main.go:141] libmachine: (addons-389176) DBG | domain addons-389176 has defined MAC address 52:54:00:ad:01:4e in network mk-addons-389176
	I0903 22:27:38.693652  113968 main.go:141] libmachine: (addons-389176) DBG | unable to find current IP address of domain addons-389176 in network mk-addons-389176
	I0903 22:27:38.693679  113968 main.go:141] libmachine: (addons-389176) DBG | I0903 22:27:38.693634  113990 retry.go:31] will retry after 845.324556ms: waiting for domain to come up
	I0903 22:27:39.540094  113968 main.go:141] libmachine: (addons-389176) DBG | domain addons-389176 has defined MAC address 52:54:00:ad:01:4e in network mk-addons-389176
	I0903 22:27:39.540612  113968 main.go:141] libmachine: (addons-389176) DBG | unable to find current IP address of domain addons-389176 in network mk-addons-389176
	I0903 22:27:39.540637  113968 main.go:141] libmachine: (addons-389176) DBG | I0903 22:27:39.540560  113990 retry.go:31] will retry after 724.962246ms: waiting for domain to come up
	I0903 22:27:40.267442  113968 main.go:141] libmachine: (addons-389176) DBG | domain addons-389176 has defined MAC address 52:54:00:ad:01:4e in network mk-addons-389176
	I0903 22:27:40.267872  113968 main.go:141] libmachine: (addons-389176) DBG | unable to find current IP address of domain addons-389176 in network mk-addons-389176
	I0903 22:27:40.267902  113968 main.go:141] libmachine: (addons-389176) DBG | I0903 22:27:40.267836  113990 retry.go:31] will retry after 1.404804035s: waiting for domain to come up
	I0903 22:27:41.674368  113968 main.go:141] libmachine: (addons-389176) DBG | domain addons-389176 has defined MAC address 52:54:00:ad:01:4e in network mk-addons-389176
	I0903 22:27:41.674720  113968 main.go:141] libmachine: (addons-389176) DBG | unable to find current IP address of domain addons-389176 in network mk-addons-389176
	I0903 22:27:41.674754  113968 main.go:141] libmachine: (addons-389176) DBG | I0903 22:27:41.674664  113990 retry.go:31] will retry after 1.367081789s: waiting for domain to come up
	I0903 22:27:43.043801  113968 main.go:141] libmachine: (addons-389176) DBG | domain addons-389176 has defined MAC address 52:54:00:ad:01:4e in network mk-addons-389176
	I0903 22:27:43.044161  113968 main.go:141] libmachine: (addons-389176) DBG | unable to find current IP address of domain addons-389176 in network mk-addons-389176
	I0903 22:27:43.044186  113968 main.go:141] libmachine: (addons-389176) DBG | I0903 22:27:43.044140  113990 retry.go:31] will retry after 1.661296838s: waiting for domain to come up
	I0903 22:27:44.707608  113968 main.go:141] libmachine: (addons-389176) DBG | domain addons-389176 has defined MAC address 52:54:00:ad:01:4e in network mk-addons-389176
	I0903 22:27:44.708180  113968 main.go:141] libmachine: (addons-389176) DBG | unable to find current IP address of domain addons-389176 in network mk-addons-389176
	I0903 22:27:44.708206  113968 main.go:141] libmachine: (addons-389176) DBG | I0903 22:27:44.708143  113990 retry.go:31] will retry after 1.77238447s: waiting for domain to come up
	I0903 22:27:46.482527  113968 main.go:141] libmachine: (addons-389176) DBG | domain addons-389176 has defined MAC address 52:54:00:ad:01:4e in network mk-addons-389176
	I0903 22:27:46.482942  113968 main.go:141] libmachine: (addons-389176) DBG | unable to find current IP address of domain addons-389176 in network mk-addons-389176
	I0903 22:27:46.482971  113968 main.go:141] libmachine: (addons-389176) DBG | I0903 22:27:46.482891  113990 retry.go:31] will retry after 2.280277847s: waiting for domain to come up
	I0903 22:27:48.766383  113968 main.go:141] libmachine: (addons-389176) DBG | domain addons-389176 has defined MAC address 52:54:00:ad:01:4e in network mk-addons-389176
	I0903 22:27:48.766814  113968 main.go:141] libmachine: (addons-389176) DBG | unable to find current IP address of domain addons-389176 in network mk-addons-389176
	I0903 22:27:48.766865  113968 main.go:141] libmachine: (addons-389176) DBG | I0903 22:27:48.766793  113990 retry.go:31] will retry after 4.163470815s: waiting for domain to come up
	I0903 22:27:52.934483  113968 main.go:141] libmachine: (addons-389176) DBG | domain addons-389176 has defined MAC address 52:54:00:ad:01:4e in network mk-addons-389176
	I0903 22:27:52.934999  113968 main.go:141] libmachine: (addons-389176) DBG | unable to find current IP address of domain addons-389176 in network mk-addons-389176
	I0903 22:27:52.935039  113968 main.go:141] libmachine: (addons-389176) DBG | I0903 22:27:52.934972  113990 retry.go:31] will retry after 5.18426913s: waiting for domain to come up
	I0903 22:27:58.123328  113968 main.go:141] libmachine: (addons-389176) DBG | domain addons-389176 has defined MAC address 52:54:00:ad:01:4e in network mk-addons-389176
	I0903 22:27:58.123763  113968 main.go:141] libmachine: (addons-389176) found domain IP: 192.168.39.230
	I0903 22:27:58.123792  113968 main.go:141] libmachine: (addons-389176) DBG | domain addons-389176 has current primary IP address 192.168.39.230 and MAC address 52:54:00:ad:01:4e in network mk-addons-389176
	I0903 22:27:58.123810  113968 main.go:141] libmachine: (addons-389176) reserving static IP address...
	I0903 22:27:58.124173  113968 main.go:141] libmachine: (addons-389176) DBG | unable to find host DHCP lease matching {name: "addons-389176", mac: "52:54:00:ad:01:4e", ip: "192.168.39.230"} in network mk-addons-389176
	I0903 22:27:58.194714  113968 main.go:141] libmachine: (addons-389176) reserved static IP address 192.168.39.230 for domain addons-389176
	I0903 22:27:58.194740  113968 main.go:141] libmachine: (addons-389176) waiting for SSH...
	I0903 22:27:58.194751  113968 main.go:141] libmachine: (addons-389176) DBG | Getting to WaitForSSH function...
	I0903 22:27:58.197467  113968 main.go:141] libmachine: (addons-389176) DBG | domain addons-389176 has defined MAC address 52:54:00:ad:01:4e in network mk-addons-389176
	I0903 22:27:58.197972  113968 main.go:141] libmachine: (addons-389176) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:01:4e", ip: ""} in network mk-addons-389176: {Iface:virbr1 ExpiryTime:2025-09-03 23:27:49 +0000 UTC Type:0 Mac:52:54:00:ad:01:4e Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:minikube Clientid:01:52:54:00:ad:01:4e}
	I0903 22:27:58.198006  113968 main.go:141] libmachine: (addons-389176) DBG | domain addons-389176 has defined IP address 192.168.39.230 and MAC address 52:54:00:ad:01:4e in network mk-addons-389176
	I0903 22:27:58.198158  113968 main.go:141] libmachine: (addons-389176) DBG | Using SSH client type: external
	I0903 22:27:58.198182  113968 main.go:141] libmachine: (addons-389176) DBG | Using SSH private key: /home/jenkins/minikube-integration/21341-109162/.minikube/machines/addons-389176/id_rsa (-rw-------)
	I0903 22:27:58.198218  113968 main.go:141] libmachine: (addons-389176) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.230 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/21341-109162/.minikube/machines/addons-389176/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0903 22:27:58.198235  113968 main.go:141] libmachine: (addons-389176) DBG | About to run SSH command:
	I0903 22:27:58.198249  113968 main.go:141] libmachine: (addons-389176) DBG | exit 0
	I0903 22:27:58.329461  113968 main.go:141] libmachine: (addons-389176) DBG | SSH cmd err, output: <nil>: 
	I0903 22:27:58.329726  113968 main.go:141] libmachine: (addons-389176) KVM machine creation complete
	I0903 22:27:58.330131  113968 main.go:141] libmachine: (addons-389176) Calling .GetConfigRaw
	I0903 22:27:58.330674  113968 main.go:141] libmachine: (addons-389176) Calling .DriverName
	I0903 22:27:58.330851  113968 main.go:141] libmachine: (addons-389176) Calling .DriverName
	I0903 22:27:58.330994  113968 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0903 22:27:58.331011  113968 main.go:141] libmachine: (addons-389176) Calling .GetState
	I0903 22:27:58.332339  113968 main.go:141] libmachine: Detecting operating system of created instance...
	I0903 22:27:58.332356  113968 main.go:141] libmachine: Waiting for SSH to be available...
	I0903 22:27:58.332361  113968 main.go:141] libmachine: Getting to WaitForSSH function...
	I0903 22:27:58.332367  113968 main.go:141] libmachine: (addons-389176) Calling .GetSSHHostname
	I0903 22:27:58.334636  113968 main.go:141] libmachine: (addons-389176) DBG | domain addons-389176 has defined MAC address 52:54:00:ad:01:4e in network mk-addons-389176
	I0903 22:27:58.335023  113968 main.go:141] libmachine: (addons-389176) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:01:4e", ip: ""} in network mk-addons-389176: {Iface:virbr1 ExpiryTime:2025-09-03 23:27:49 +0000 UTC Type:0 Mac:52:54:00:ad:01:4e Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:addons-389176 Clientid:01:52:54:00:ad:01:4e}
	I0903 22:27:58.335048  113968 main.go:141] libmachine: (addons-389176) DBG | domain addons-389176 has defined IP address 192.168.39.230 and MAC address 52:54:00:ad:01:4e in network mk-addons-389176
	I0903 22:27:58.335150  113968 main.go:141] libmachine: (addons-389176) Calling .GetSSHPort
	I0903 22:27:58.335324  113968 main.go:141] libmachine: (addons-389176) Calling .GetSSHKeyPath
	I0903 22:27:58.335445  113968 main.go:141] libmachine: (addons-389176) Calling .GetSSHKeyPath
	I0903 22:27:58.335581  113968 main.go:141] libmachine: (addons-389176) Calling .GetSSHUsername
	I0903 22:27:58.335761  113968 main.go:141] libmachine: Using SSH client type: native
	I0903 22:27:58.336033  113968 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83a420] 0x83d120 <nil>  [] 0s} 192.168.39.230 22 <nil> <nil>}
	I0903 22:27:58.336047  113968 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0903 22:27:58.440696  113968 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0903 22:27:58.440721  113968 main.go:141] libmachine: Detecting the provisioner...
	I0903 22:27:58.440729  113968 main.go:141] libmachine: (addons-389176) Calling .GetSSHHostname
	I0903 22:27:58.444656  113968 main.go:141] libmachine: (addons-389176) DBG | domain addons-389176 has defined MAC address 52:54:00:ad:01:4e in network mk-addons-389176
	I0903 22:27:58.445063  113968 main.go:141] libmachine: (addons-389176) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:01:4e", ip: ""} in network mk-addons-389176: {Iface:virbr1 ExpiryTime:2025-09-03 23:27:49 +0000 UTC Type:0 Mac:52:54:00:ad:01:4e Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:addons-389176 Clientid:01:52:54:00:ad:01:4e}
	I0903 22:27:58.445090  113968 main.go:141] libmachine: (addons-389176) DBG | domain addons-389176 has defined IP address 192.168.39.230 and MAC address 52:54:00:ad:01:4e in network mk-addons-389176
	I0903 22:27:58.445286  113968 main.go:141] libmachine: (addons-389176) Calling .GetSSHPort
	I0903 22:27:58.445535  113968 main.go:141] libmachine: (addons-389176) Calling .GetSSHKeyPath
	I0903 22:27:58.445742  113968 main.go:141] libmachine: (addons-389176) Calling .GetSSHKeyPath
	I0903 22:27:58.445896  113968 main.go:141] libmachine: (addons-389176) Calling .GetSSHUsername
	I0903 22:27:58.446056  113968 main.go:141] libmachine: Using SSH client type: native
	I0903 22:27:58.446247  113968 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83a420] 0x83d120 <nil>  [] 0s} 192.168.39.230 22 <nil> <nil>}
	I0903 22:27:58.446258  113968 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0903 22:27:58.550494  113968 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2025.02-dirty
	ID=buildroot
	VERSION_ID=2025.02
	PRETTY_NAME="Buildroot 2025.02"
	
	I0903 22:27:58.550568  113968 main.go:141] libmachine: found compatible host: buildroot
	I0903 22:27:58.550576  113968 main.go:141] libmachine: Provisioning with buildroot...
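Aside: provisioner detection amounts to parsing the /etc/os-release output above into key/value pairs and matching the ID field ("buildroot" here). An illustrative Go sketch of that parse, under the assumption that matching is keyed on ID:

    package main

    import (
    	"bufio"
    	"fmt"
    	"strings"
    )

    // parseOSRelease turns KEY=VALUE lines into a map, stripping surrounding quotes.
    func parseOSRelease(s string) map[string]string {
    	kv := map[string]string{}
    	sc := bufio.NewScanner(strings.NewReader(s))
    	for sc.Scan() {
    		line := strings.TrimSpace(sc.Text())
    		if line == "" || !strings.Contains(line, "=") {
    			continue
    		}
    		parts := strings.SplitN(line, "=", 2)
    		kv[parts[0]] = strings.Trim(parts[1], `"`)
    	}
    	return kv
    }

    func main() {
    	out := "NAME=Buildroot\nVERSION=2025.02-dirty\nID=buildroot\nVERSION_ID=2025.02\nPRETTY_NAME=\"Buildroot 2025.02\"\n"
    	info := parseOSRelease(out)
    	fmt.Println(info["ID"] == "buildroot") // true -> "found compatible host: buildroot"
    }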
	I0903 22:27:58.550585  113968 main.go:141] libmachine: (addons-389176) Calling .GetMachineName
	I0903 22:27:58.550878  113968 buildroot.go:166] provisioning hostname "addons-389176"
	I0903 22:27:58.550907  113968 main.go:141] libmachine: (addons-389176) Calling .GetMachineName
	I0903 22:27:58.551094  113968 main.go:141] libmachine: (addons-389176) Calling .GetSSHHostname
	I0903 22:27:58.553631  113968 main.go:141] libmachine: (addons-389176) DBG | domain addons-389176 has defined MAC address 52:54:00:ad:01:4e in network mk-addons-389176
	I0903 22:27:58.553996  113968 main.go:141] libmachine: (addons-389176) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:01:4e", ip: ""} in network mk-addons-389176: {Iface:virbr1 ExpiryTime:2025-09-03 23:27:49 +0000 UTC Type:0 Mac:52:54:00:ad:01:4e Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:addons-389176 Clientid:01:52:54:00:ad:01:4e}
	I0903 22:27:58.554025  113968 main.go:141] libmachine: (addons-389176) DBG | domain addons-389176 has defined IP address 192.168.39.230 and MAC address 52:54:00:ad:01:4e in network mk-addons-389176
	I0903 22:27:58.554184  113968 main.go:141] libmachine: (addons-389176) Calling .GetSSHPort
	I0903 22:27:58.554382  113968 main.go:141] libmachine: (addons-389176) Calling .GetSSHKeyPath
	I0903 22:27:58.554526  113968 main.go:141] libmachine: (addons-389176) Calling .GetSSHKeyPath
	I0903 22:27:58.554661  113968 main.go:141] libmachine: (addons-389176) Calling .GetSSHUsername
	I0903 22:27:58.554802  113968 main.go:141] libmachine: Using SSH client type: native
	I0903 22:27:58.554995  113968 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83a420] 0x83d120 <nil>  [] 0s} 192.168.39.230 22 <nil> <nil>}
	I0903 22:27:58.555008  113968 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-389176 && echo "addons-389176" | sudo tee /etc/hostname
	I0903 22:27:58.673526  113968 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-389176
	
	I0903 22:27:58.673559  113968 main.go:141] libmachine: (addons-389176) Calling .GetSSHHostname
	I0903 22:27:58.675995  113968 main.go:141] libmachine: (addons-389176) DBG | domain addons-389176 has defined MAC address 52:54:00:ad:01:4e in network mk-addons-389176
	I0903 22:27:58.676247  113968 main.go:141] libmachine: (addons-389176) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:01:4e", ip: ""} in network mk-addons-389176: {Iface:virbr1 ExpiryTime:2025-09-03 23:27:49 +0000 UTC Type:0 Mac:52:54:00:ad:01:4e Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:addons-389176 Clientid:01:52:54:00:ad:01:4e}
	I0903 22:27:58.676282  113968 main.go:141] libmachine: (addons-389176) DBG | domain addons-389176 has defined IP address 192.168.39.230 and MAC address 52:54:00:ad:01:4e in network mk-addons-389176
	I0903 22:27:58.676448  113968 main.go:141] libmachine: (addons-389176) Calling .GetSSHPort
	I0903 22:27:58.676654  113968 main.go:141] libmachine: (addons-389176) Calling .GetSSHKeyPath
	I0903 22:27:58.676818  113968 main.go:141] libmachine: (addons-389176) Calling .GetSSHKeyPath
	I0903 22:27:58.676981  113968 main.go:141] libmachine: (addons-389176) Calling .GetSSHUsername
	I0903 22:27:58.677145  113968 main.go:141] libmachine: Using SSH client type: native
	I0903 22:27:58.677439  113968 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83a420] 0x83d120 <nil>  [] 0s} 192.168.39.230 22 <nil> <nil>}
	I0903 22:27:58.677466  113968 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-389176' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-389176/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-389176' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0903 22:27:58.787104  113968 main.go:141] libmachine: SSH cmd err, output: <nil>: 
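Aside: the shell block above keeps /etc/hosts consistent with the new hostname: if no line already ends with the hostname, it rewrites an existing 127.0.1.1 entry in place, else appends one. The same decision logic in Go (a sketch over the file contents only; the real step edits the file over SSH with sed/tee):

    package main

    import (
    	"fmt"
    	"regexp"
    	"strings"
    )

    // ensureHostname reproduces the grep/sed/tee branches shown in the log.
    func ensureHostname(hosts, name string) string {
    	if regexp.MustCompile(`(?m)^.*\s` + regexp.QuoteMeta(name) + `$`).MatchString(hosts) {
    		return hosts // hostname already mapped
    	}
    	loop := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
    	if loop.MatchString(hosts) {
    		return loop.ReplaceAllString(hosts, "127.0.1.1 "+name) // rewrite existing entry
    	}
    	return strings.TrimRight(hosts, "\n") + "\n127.0.1.1 " + name + "\n" // append
    }

    func main() {
    	fmt.Print(ensureHostname("127.0.0.1 localhost\n", "addons-389176"))
    }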
	I0903 22:27:58.787140  113968 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21341-109162/.minikube CaCertPath:/home/jenkins/minikube-integration/21341-109162/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21341-109162/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21341-109162/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21341-109162/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21341-109162/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21341-109162/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21341-109162/.minikube}
	I0903 22:27:58.787163  113968 buildroot.go:174] setting up certificates
	I0903 22:27:58.787174  113968 provision.go:84] configureAuth start
	I0903 22:27:58.787188  113968 main.go:141] libmachine: (addons-389176) Calling .GetMachineName
	I0903 22:27:58.787504  113968 main.go:141] libmachine: (addons-389176) Calling .GetIP
	I0903 22:27:58.790097  113968 main.go:141] libmachine: (addons-389176) DBG | domain addons-389176 has defined MAC address 52:54:00:ad:01:4e in network mk-addons-389176
	I0903 22:27:58.790499  113968 main.go:141] libmachine: (addons-389176) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:01:4e", ip: ""} in network mk-addons-389176: {Iface:virbr1 ExpiryTime:2025-09-03 23:27:49 +0000 UTC Type:0 Mac:52:54:00:ad:01:4e Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:addons-389176 Clientid:01:52:54:00:ad:01:4e}
	I0903 22:27:58.790530  113968 main.go:141] libmachine: (addons-389176) DBG | domain addons-389176 has defined IP address 192.168.39.230 and MAC address 52:54:00:ad:01:4e in network mk-addons-389176
	I0903 22:27:58.790658  113968 main.go:141] libmachine: (addons-389176) Calling .GetSSHHostname
	I0903 22:27:58.793338  113968 main.go:141] libmachine: (addons-389176) DBG | domain addons-389176 has defined MAC address 52:54:00:ad:01:4e in network mk-addons-389176
	I0903 22:27:58.793661  113968 main.go:141] libmachine: (addons-389176) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:01:4e", ip: ""} in network mk-addons-389176: {Iface:virbr1 ExpiryTime:2025-09-03 23:27:49 +0000 UTC Type:0 Mac:52:54:00:ad:01:4e Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:addons-389176 Clientid:01:52:54:00:ad:01:4e}
	I0903 22:27:58.793689  113968 main.go:141] libmachine: (addons-389176) DBG | domain addons-389176 has defined IP address 192.168.39.230 and MAC address 52:54:00:ad:01:4e in network mk-addons-389176
	I0903 22:27:58.793816  113968 provision.go:143] copyHostCerts
	I0903 22:27:58.793882  113968 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21341-109162/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21341-109162/.minikube/ca.pem (1078 bytes)
	I0903 22:27:58.793988  113968 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21341-109162/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21341-109162/.minikube/cert.pem (1123 bytes)
	I0903 22:27:58.794056  113968 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21341-109162/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21341-109162/.minikube/key.pem (1675 bytes)
	I0903 22:27:58.794110  113968 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21341-109162/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21341-109162/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21341-109162/.minikube/certs/ca-key.pem org=jenkins.addons-389176 san=[127.0.0.1 192.168.39.230 addons-389176 localhost minikube]
	I0903 22:27:59.201988  113968 provision.go:177] copyRemoteCerts
	I0903 22:27:59.202048  113968 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0903 22:27:59.202075  113968 main.go:141] libmachine: (addons-389176) Calling .GetSSHHostname
	I0903 22:27:59.204677  113968 main.go:141] libmachine: (addons-389176) DBG | domain addons-389176 has defined MAC address 52:54:00:ad:01:4e in network mk-addons-389176
	I0903 22:27:59.205008  113968 main.go:141] libmachine: (addons-389176) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:01:4e", ip: ""} in network mk-addons-389176: {Iface:virbr1 ExpiryTime:2025-09-03 23:27:49 +0000 UTC Type:0 Mac:52:54:00:ad:01:4e Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:addons-389176 Clientid:01:52:54:00:ad:01:4e}
	I0903 22:27:59.205041  113968 main.go:141] libmachine: (addons-389176) DBG | domain addons-389176 has defined IP address 192.168.39.230 and MAC address 52:54:00:ad:01:4e in network mk-addons-389176
	I0903 22:27:59.205210  113968 main.go:141] libmachine: (addons-389176) Calling .GetSSHPort
	I0903 22:27:59.205451  113968 main.go:141] libmachine: (addons-389176) Calling .GetSSHKeyPath
	I0903 22:27:59.205644  113968 main.go:141] libmachine: (addons-389176) Calling .GetSSHUsername
	I0903 22:27:59.205796  113968 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21341-109162/.minikube/machines/addons-389176/id_rsa Username:docker}
	I0903 22:27:59.291134  113968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0903 22:27:59.319186  113968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0903 22:27:59.345830  113968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0903 22:27:59.371740  113968 provision.go:87] duration metric: took 584.551692ms to configureAuth
	I0903 22:27:59.371770  113968 buildroot.go:189] setting minikube options for container-runtime
	I0903 22:27:59.371952  113968 config.go:182] Loaded profile config "addons-389176": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0903 22:27:59.372040  113968 main.go:141] libmachine: (addons-389176) Calling .GetSSHHostname
	I0903 22:27:59.374719  113968 main.go:141] libmachine: (addons-389176) DBG | domain addons-389176 has defined MAC address 52:54:00:ad:01:4e in network mk-addons-389176
	I0903 22:27:59.375029  113968 main.go:141] libmachine: (addons-389176) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:01:4e", ip: ""} in network mk-addons-389176: {Iface:virbr1 ExpiryTime:2025-09-03 23:27:49 +0000 UTC Type:0 Mac:52:54:00:ad:01:4e Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:addons-389176 Clientid:01:52:54:00:ad:01:4e}
	I0903 22:27:59.375062  113968 main.go:141] libmachine: (addons-389176) DBG | domain addons-389176 has defined IP address 192.168.39.230 and MAC address 52:54:00:ad:01:4e in network mk-addons-389176
	I0903 22:27:59.375190  113968 main.go:141] libmachine: (addons-389176) Calling .GetSSHPort
	I0903 22:27:59.375414  113968 main.go:141] libmachine: (addons-389176) Calling .GetSSHKeyPath
	I0903 22:27:59.375579  113968 main.go:141] libmachine: (addons-389176) Calling .GetSSHKeyPath
	I0903 22:27:59.375718  113968 main.go:141] libmachine: (addons-389176) Calling .GetSSHUsername
	I0903 22:27:59.375855  113968 main.go:141] libmachine: Using SSH client type: native
	I0903 22:27:59.376042  113968 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83a420] 0x83d120 <nil>  [] 0s} 192.168.39.230 22 <nil> <nil>}
	I0903 22:27:59.376056  113968 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0903 22:27:59.602609  113968 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
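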
	
	I0903 22:27:59.602638  113968 main.go:141] libmachine: Checking connection to Docker...
	I0903 22:27:59.602646  113968 main.go:141] libmachine: (addons-389176) Calling .GetURL
	I0903 22:27:59.603892  113968 main.go:141] libmachine: (addons-389176) DBG | using libvirt version 6000000
	I0903 22:27:59.606088  113968 main.go:141] libmachine: (addons-389176) DBG | domain addons-389176 has defined MAC address 52:54:00:ad:01:4e in network mk-addons-389176
	I0903 22:27:59.606393  113968 main.go:141] libmachine: (addons-389176) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:01:4e", ip: ""} in network mk-addons-389176: {Iface:virbr1 ExpiryTime:2025-09-03 23:27:49 +0000 UTC Type:0 Mac:52:54:00:ad:01:4e Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:addons-389176 Clientid:01:52:54:00:ad:01:4e}
	I0903 22:27:59.606429  113968 main.go:141] libmachine: (addons-389176) DBG | domain addons-389176 has defined IP address 192.168.39.230 and MAC address 52:54:00:ad:01:4e in network mk-addons-389176
	I0903 22:27:59.606557  113968 main.go:141] libmachine: Docker is up and running!
	I0903 22:27:59.606584  113968 main.go:141] libmachine: Reticulating splines...
	I0903 22:27:59.606593  113968 client.go:171] duration metric: took 25.382996409s to LocalClient.Create
	I0903 22:27:59.606626  113968 start.go:167] duration metric: took 25.383069312s to libmachine.API.Create "addons-389176"
	I0903 22:27:59.606637  113968 start.go:293] postStartSetup for "addons-389176" (driver="kvm2")
	I0903 22:27:59.606647  113968 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0903 22:27:59.606664  113968 main.go:141] libmachine: (addons-389176) Calling .DriverName
	I0903 22:27:59.606905  113968 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0903 22:27:59.606930  113968 main.go:141] libmachine: (addons-389176) Calling .GetSSHHostname
	I0903 22:27:59.608799  113968 main.go:141] libmachine: (addons-389176) DBG | domain addons-389176 has defined MAC address 52:54:00:ad:01:4e in network mk-addons-389176
	I0903 22:27:59.609077  113968 main.go:141] libmachine: (addons-389176) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:01:4e", ip: ""} in network mk-addons-389176: {Iface:virbr1 ExpiryTime:2025-09-03 23:27:49 +0000 UTC Type:0 Mac:52:54:00:ad:01:4e Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:addons-389176 Clientid:01:52:54:00:ad:01:4e}
	I0903 22:27:59.609110  113968 main.go:141] libmachine: (addons-389176) DBG | domain addons-389176 has defined IP address 192.168.39.230 and MAC address 52:54:00:ad:01:4e in network mk-addons-389176
	I0903 22:27:59.609233  113968 main.go:141] libmachine: (addons-389176) Calling .GetSSHPort
	I0903 22:27:59.609407  113968 main.go:141] libmachine: (addons-389176) Calling .GetSSHKeyPath
	I0903 22:27:59.609569  113968 main.go:141] libmachine: (addons-389176) Calling .GetSSHUsername
	I0903 22:27:59.609712  113968 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21341-109162/.minikube/machines/addons-389176/id_rsa Username:docker}
	I0903 22:27:59.692922  113968 ssh_runner.go:195] Run: cat /etc/os-release
	I0903 22:27:59.697695  113968 info.go:137] Remote host: Buildroot 2025.02
	I0903 22:27:59.697718  113968 filesync.go:126] Scanning /home/jenkins/minikube-integration/21341-109162/.minikube/addons for local assets ...
	I0903 22:27:59.697788  113968 filesync.go:126] Scanning /home/jenkins/minikube-integration/21341-109162/.minikube/files for local assets ...
	I0903 22:27:59.697810  113968 start.go:296] duration metric: took 91.168269ms for postStartSetup
	I0903 22:27:59.697842  113968 main.go:141] libmachine: (addons-389176) Calling .GetConfigRaw
	I0903 22:27:59.698522  113968 main.go:141] libmachine: (addons-389176) Calling .GetIP
	I0903 22:27:59.700754  113968 main.go:141] libmachine: (addons-389176) DBG | domain addons-389176 has defined MAC address 52:54:00:ad:01:4e in network mk-addons-389176
	I0903 22:27:59.701058  113968 main.go:141] libmachine: (addons-389176) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:01:4e", ip: ""} in network mk-addons-389176: {Iface:virbr1 ExpiryTime:2025-09-03 23:27:49 +0000 UTC Type:0 Mac:52:54:00:ad:01:4e Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:addons-389176 Clientid:01:52:54:00:ad:01:4e}
	I0903 22:27:59.701085  113968 main.go:141] libmachine: (addons-389176) DBG | domain addons-389176 has defined IP address 192.168.39.230 and MAC address 52:54:00:ad:01:4e in network mk-addons-389176
	I0903 22:27:59.701282  113968 profile.go:143] Saving config to /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/addons-389176/config.json ...
	I0903 22:27:59.701482  113968 start.go:128] duration metric: took 25.49671437s to createHost
	I0903 22:27:59.701504  113968 main.go:141] libmachine: (addons-389176) Calling .GetSSHHostname
	I0903 22:27:59.703757  113968 main.go:141] libmachine: (addons-389176) DBG | domain addons-389176 has defined MAC address 52:54:00:ad:01:4e in network mk-addons-389176
	I0903 22:27:59.704039  113968 main.go:141] libmachine: (addons-389176) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:01:4e", ip: ""} in network mk-addons-389176: {Iface:virbr1 ExpiryTime:2025-09-03 23:27:49 +0000 UTC Type:0 Mac:52:54:00:ad:01:4e Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:addons-389176 Clientid:01:52:54:00:ad:01:4e}
	I0903 22:27:59.704067  113968 main.go:141] libmachine: (addons-389176) DBG | domain addons-389176 has defined IP address 192.168.39.230 and MAC address 52:54:00:ad:01:4e in network mk-addons-389176
	I0903 22:27:59.704194  113968 main.go:141] libmachine: (addons-389176) Calling .GetSSHPort
	I0903 22:27:59.704348  113968 main.go:141] libmachine: (addons-389176) Calling .GetSSHKeyPath
	I0903 22:27:59.704499  113968 main.go:141] libmachine: (addons-389176) Calling .GetSSHKeyPath
	I0903 22:27:59.704633  113968 main.go:141] libmachine: (addons-389176) Calling .GetSSHUsername
	I0903 22:27:59.704780  113968 main.go:141] libmachine: Using SSH client type: native
	I0903 22:27:59.705024  113968 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83a420] 0x83d120 <nil>  [] 0s} 192.168.39.230 22 <nil> <nil>}
	I0903 22:27:59.705038  113968 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0903 22:27:59.806624  113968 main.go:141] libmachine: SSH cmd err, output: <nil>: 1756938479.783605528
	
	I0903 22:27:59.806650  113968 fix.go:216] guest clock: 1756938479.783605528
	I0903 22:27:59.806658  113968 fix.go:229] Guest: 2025-09-03 22:27:59.783605528 +0000 UTC Remote: 2025-09-03 22:27:59.70149375 +0000 UTC m=+25.599685949 (delta=82.111778ms)
	I0903 22:27:59.806707  113968 fix.go:200] guest clock delta is within tolerance: 82.111778ms
	I0903 22:27:59.806716  113968 start.go:83] releasing machines lock for "addons-389176", held for 25.602058652s
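Aside: the skew check above parses the guest's `date +%s.%N` output and compares it to the host clock; the log shows a delta of about 82ms, which passes. A Go sketch of the computation (the 1-second tolerance below is hypothetical, not minikube's actual bound):

    package main

    import (
    	"fmt"
    	"math"
    	"strconv"
    	"time"
    )

    // clockDelta parses fractional epoch seconds; float parsing is approximate
    // at nanosecond scale, which is fine for a millisecond-level tolerance.
    func clockDelta(guestOut string, host time.Time) (time.Duration, error) {
    	sec, err := strconv.ParseFloat(guestOut, 64)
    	if err != nil {
    		return 0, err
    	}
    	guest := time.Unix(0, int64(sec*float64(time.Second)))
    	return guest.Sub(host), nil
    }

    func main() {
    	// Values taken from the log lines above.
    	d, _ := clockDelta("1756938479.783605528", time.Unix(1756938479, 701493750))
    	fmt.Println(d, math.Abs(d.Seconds()) < 1.0) // ~82ms, within a hypothetical 1s tolerance
    }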
	I0903 22:27:59.806752  113968 main.go:141] libmachine: (addons-389176) Calling .DriverName
	I0903 22:27:59.807032  113968 main.go:141] libmachine: (addons-389176) Calling .GetIP
	I0903 22:27:59.809453  113968 main.go:141] libmachine: (addons-389176) DBG | domain addons-389176 has defined MAC address 52:54:00:ad:01:4e in network mk-addons-389176
	I0903 22:27:59.809770  113968 main.go:141] libmachine: (addons-389176) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:01:4e", ip: ""} in network mk-addons-389176: {Iface:virbr1 ExpiryTime:2025-09-03 23:27:49 +0000 UTC Type:0 Mac:52:54:00:ad:01:4e Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:addons-389176 Clientid:01:52:54:00:ad:01:4e}
	I0903 22:27:59.809806  113968 main.go:141] libmachine: (addons-389176) DBG | domain addons-389176 has defined IP address 192.168.39.230 and MAC address 52:54:00:ad:01:4e in network mk-addons-389176
	I0903 22:27:59.809974  113968 main.go:141] libmachine: (addons-389176) Calling .DriverName
	I0903 22:27:59.810496  113968 main.go:141] libmachine: (addons-389176) Calling .DriverName
	I0903 22:27:59.810656  113968 main.go:141] libmachine: (addons-389176) Calling .DriverName
	I0903 22:27:59.810761  113968 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0903 22:27:59.810809  113968 main.go:141] libmachine: (addons-389176) Calling .GetSSHHostname
	I0903 22:27:59.810906  113968 ssh_runner.go:195] Run: cat /version.json
	I0903 22:27:59.810932  113968 main.go:141] libmachine: (addons-389176) Calling .GetSSHHostname
	I0903 22:27:59.813418  113968 main.go:141] libmachine: (addons-389176) DBG | domain addons-389176 has defined MAC address 52:54:00:ad:01:4e in network mk-addons-389176
	I0903 22:27:59.813626  113968 main.go:141] libmachine: (addons-389176) DBG | domain addons-389176 has defined MAC address 52:54:00:ad:01:4e in network mk-addons-389176
	I0903 22:27:59.813855  113968 main.go:141] libmachine: (addons-389176) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:01:4e", ip: ""} in network mk-addons-389176: {Iface:virbr1 ExpiryTime:2025-09-03 23:27:49 +0000 UTC Type:0 Mac:52:54:00:ad:01:4e Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:addons-389176 Clientid:01:52:54:00:ad:01:4e}
	I0903 22:27:59.813893  113968 main.go:141] libmachine: (addons-389176) DBG | domain addons-389176 has defined IP address 192.168.39.230 and MAC address 52:54:00:ad:01:4e in network mk-addons-389176
	I0903 22:27:59.813915  113968 main.go:141] libmachine: (addons-389176) Calling .GetSSHPort
	I0903 22:27:59.813919  113968 main.go:141] libmachine: (addons-389176) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:01:4e", ip: ""} in network mk-addons-389176: {Iface:virbr1 ExpiryTime:2025-09-03 23:27:49 +0000 UTC Type:0 Mac:52:54:00:ad:01:4e Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:addons-389176 Clientid:01:52:54:00:ad:01:4e}
	I0903 22:27:59.813998  113968 main.go:141] libmachine: (addons-389176) DBG | domain addons-389176 has defined IP address 192.168.39.230 and MAC address 52:54:00:ad:01:4e in network mk-addons-389176
	I0903 22:27:59.814094  113968 main.go:141] libmachine: (addons-389176) Calling .GetSSHPort
	I0903 22:27:59.814147  113968 main.go:141] libmachine: (addons-389176) Calling .GetSSHKeyPath
	I0903 22:27:59.814245  113968 main.go:141] libmachine: (addons-389176) Calling .GetSSHKeyPath
	I0903 22:27:59.814326  113968 main.go:141] libmachine: (addons-389176) Calling .GetSSHUsername
	I0903 22:27:59.814328  113968 main.go:141] libmachine: (addons-389176) Calling .GetSSHUsername
	I0903 22:27:59.814496  113968 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21341-109162/.minikube/machines/addons-389176/id_rsa Username:docker}
	I0903 22:27:59.814496  113968 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21341-109162/.minikube/machines/addons-389176/id_rsa Username:docker}
	I0903 22:27:59.898540  113968 ssh_runner.go:195] Run: systemctl --version
	I0903 22:27:59.937169  113968 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0903 22:28:00.088893  113968 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0903 22:28:00.095257  113968 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0903 22:28:00.095332  113968 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0903 22:28:00.114401  113968 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
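Aside: the find/mv pipeline above neutralizes pre-existing bridge/podman CNI configs by renaming them with a .mk_disabled suffix so they stop shadowing the CNI minikube installs next. The same sweep written natively in Go (a sketch; the real step shells out as shown in the log):

    package main

    import (
    	"fmt"
    	"os"
    	"path/filepath"
    	"strings"
    )

    // disableBridgeCNI renames matching config files and reports what it moved.
    func disableBridgeCNI(dir string) ([]string, error) {
    	entries, err := os.ReadDir(dir)
    	if err != nil {
    		return nil, err
    	}
    	var moved []string
    	for _, e := range entries {
    		name := e.Name()
    		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
    			continue
    		}
    		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
    			src := filepath.Join(dir, name)
    			if err := os.Rename(src, src+".mk_disabled"); err != nil {
    				return moved, err
    			}
    			moved = append(moved, src)
    		}
    	}
    	return moved, nil
    }

    func main() {
    	fmt.Println(disableBridgeCNI("/etc/cni/net.d"))
    }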
	I0903 22:28:00.114433  113968 start.go:495] detecting cgroup driver to use...
	I0903 22:28:00.114503  113968 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0903 22:28:00.132545  113968 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0903 22:28:00.147445  113968 docker.go:218] disabling cri-docker service (if available) ...
	I0903 22:28:00.147517  113968 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0903 22:28:00.162176  113968 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0903 22:28:00.176846  113968 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0903 22:28:00.314895  113968 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0903 22:28:00.453027  113968 docker.go:234] disabling docker service ...
	I0903 22:28:00.453100  113968 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0903 22:28:00.468424  113968 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0903 22:28:00.481943  113968 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0903 22:28:00.683603  113968 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0903 22:28:00.813511  113968 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0903 22:28:00.828159  113968 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0903 22:28:00.848296  113968 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0903 22:28:00.848361  113968 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0903 22:28:00.859603  113968 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0903 22:28:00.859676  113968 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0903 22:28:00.870930  113968 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0903 22:28:00.881984  113968 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0903 22:28:00.893104  113968 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0903 22:28:00.904859  113968 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0903 22:28:00.915957  113968 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0903 22:28:00.935821  113968 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0903 22:28:00.947738  113968 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0903 22:28:00.957531  113968 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0903 22:28:00.957612  113968 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0903 22:28:00.975852  113968 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
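Aside: the sequence above is a probe-then-fallback: the sysctl read fails because /proc/sys/net/bridge/ only exists once br_netfilter is loaded, so the module is modprobe'd and IPv4 forwarding enabled. A Go sketch of that fallback (must run as root; illustrative, not minikube's code):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    // ensureBridgeNetfilter loads br_netfilter if its sysctl tree is absent,
    // then turns on IPv4 forwarding, matching the log lines above.
    func ensureBridgeNetfilter() error {
    	if _, err := os.Stat("/proc/sys/net/bridge/bridge-nf-call-iptables"); err != nil {
    		// sysctl key absent: the module is not loaded yet
    		if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
    			return fmt.Errorf("modprobe br_netfilter: %v: %s", err, out)
    		}
    	}
    	return os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0644)
    }

    func main() {
    	fmt.Println(ensureBridgeNetfilter())
    }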
	I0903 22:28:00.987230  113968 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0903 22:28:01.129346  113968 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0903 22:28:01.243665  113968 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0903 22:28:01.243800  113968 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0903 22:28:01.248697  113968 start.go:563] Will wait 60s for crictl version
	I0903 22:28:01.248783  113968 ssh_runner.go:195] Run: which crictl
	I0903 22:28:01.252304  113968 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0903 22:28:01.287998  113968 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0903 22:28:01.288110  113968 ssh_runner.go:195] Run: crio --version
	I0903 22:28:01.314925  113968 ssh_runner.go:195] Run: crio --version
	I0903 22:28:01.343625  113968 out.go:179] * Preparing Kubernetes v1.34.0 on CRI-O 1.29.1 ...
	I0903 22:28:01.344881  113968 main.go:141] libmachine: (addons-389176) Calling .GetIP
	I0903 22:28:01.347576  113968 main.go:141] libmachine: (addons-389176) DBG | domain addons-389176 has defined MAC address 52:54:00:ad:01:4e in network mk-addons-389176
	I0903 22:28:01.347975  113968 main.go:141] libmachine: (addons-389176) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:01:4e", ip: ""} in network mk-addons-389176: {Iface:virbr1 ExpiryTime:2025-09-03 23:27:49 +0000 UTC Type:0 Mac:52:54:00:ad:01:4e Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:addons-389176 Clientid:01:52:54:00:ad:01:4e}
	I0903 22:28:01.348002  113968 main.go:141] libmachine: (addons-389176) DBG | domain addons-389176 has defined IP address 192.168.39.230 and MAC address 52:54:00:ad:01:4e in network mk-addons-389176
	I0903 22:28:01.348231  113968 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0903 22:28:01.352480  113968 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0903 22:28:01.366411  113968 kubeadm.go:875] updating cluster {Name:addons-389176 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21147/minikube-v1.36.0-1753487480-21147-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-389176 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.230 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0903 22:28:01.366561  113968 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0903 22:28:01.366634  113968 ssh_runner.go:195] Run: sudo crictl images --output json
	I0903 22:28:01.398252  113968 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.0". assuming images are not preloaded.
	I0903 22:28:01.398324  113968 ssh_runner.go:195] Run: which lz4
	I0903 22:28:01.402284  113968 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0903 22:28:01.406533  113968 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0903 22:28:01.406568  113968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (409455026 bytes)
	I0903 22:28:02.672133  113968 crio.go:462] duration metric: took 1.269874007s to copy over tarball
	I0903 22:28:02.672208  113968 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0903 22:28:04.239979  113968 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.567738571s)
	I0903 22:28:04.240009  113968 crio.go:469] duration metric: took 1.567847273s to extract the tarball
	I0903 22:28:04.240018  113968 ssh_runner.go:146] rm: /preloaded.tar.lz4
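Aside: the preload path above avoids pulling every image: stat /preloaded.tar.lz4 fails, the ~409MB cached tarball is scp'd over, extracted into /var with xattrs preserved (so file capabilities survive), then deleted. A Go sketch of the extraction half, under the assumption that tar and lz4 exist on the guest as the log shows:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    // extractPreload unpacks the preloaded image tarball into /var, mirroring
    // the tar invocation in the log above.
    func extractPreload(tarball string) error {
    	if _, err := os.Stat(tarball); err != nil {
    		return fmt.Errorf("preload missing, copy it over first: %w", err)
    	}
    	cmd := exec.Command("tar", "--xattrs", "--xattrs-include", "security.capability",
    		"-I", "lz4", "-C", "/var", "-xf", tarball)
    	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
    	return cmd.Run()
    }

    func main() {
    	fmt.Println(extractPreload("/preloaded.tar.lz4"))
    }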
	I0903 22:28:04.279576  113968 ssh_runner.go:195] Run: sudo crictl images --output json
	I0903 22:28:04.324595  113968 crio.go:514] all images are preloaded for cri-o runtime.
	I0903 22:28:04.324622  113968 cache_images.go:85] Images are preloaded, skipping loading
	I0903 22:28:04.324629  113968 kubeadm.go:926] updating node { 192.168.39.230 8443 v1.34.0 crio true true} ...
	I0903 22:28:04.324724  113968 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-389176 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.230
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:addons-389176 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
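Aside: the kubelet unit drop-in above wires the node's identity (hostname override and node IP) into the versioned kubelet binary path. A sketch of how such a drop-in could be templated in Go (the struct and field names here are illustrative, not minikube's actual template variables):

    package main

    import (
    	"fmt"
    	"strings"
    )

    type kubeletOpts struct {
    	Version, Hostname, NodeIP string
    }

    // renderUnit reproduces the [Unit]/[Service]/[Install] drop-in from the log;
    // the empty ExecStart= line clears any inherited ExecStart before setting ours.
    func renderUnit(o kubeletOpts) string {
    	var b strings.Builder
    	b.WriteString("[Unit]\nWants=crio.service\n\n[Service]\nExecStart=\n")
    	fmt.Fprintf(&b, "ExecStart=/var/lib/minikube/binaries/%s/kubelet"+
    		" --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf"+
    		" --config=/var/lib/kubelet/config.yaml"+
    		" --hostname-override=%s"+
    		" --kubeconfig=/etc/kubernetes/kubelet.conf"+
    		" --node-ip=%s\n\n[Install]\n", o.Version, o.Hostname, o.NodeIP)
    	return b.String()
    }

    func main() {
    	fmt.Print(renderUnit(kubeletOpts{"v1.34.0", "addons-389176", "192.168.39.230"}))
    }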
	I0903 22:28:04.324792  113968 ssh_runner.go:195] Run: crio config
	I0903 22:28:04.367024  113968 cni.go:84] Creating CNI manager for ""
	I0903 22:28:04.367051  113968 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0903 22:28:04.367062  113968 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0903 22:28:04.367083  113968 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.230 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-389176 NodeName:addons-389176 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.230"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.230 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0903 22:28:04.367197  113968 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.230
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-389176"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.230"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.230"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
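Aside: the generated kubeadm config above is a multi-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). For readers wanting to inspect such output programmatically, a sketch that pulls fields back out of one document, assuming gopkg.in/yaml.v3 as the parser:

    package main

    import (
    	"fmt"
    	"strings"

    	"gopkg.in/yaml.v3"
    )

    func main() {
    	// A trimmed excerpt of the ClusterConfiguration document above.
    	doc := "apiVersion: kubeadm.k8s.io/v1beta4\nkind: ClusterConfiguration\n" +
    		"kubernetesVersion: v1.34.0\nnetworking:\n  podSubnet: \"10.244.0.0/16\"\n"
    	var cfg struct {
    		Kind              string `yaml:"kind"`
    		KubernetesVersion string `yaml:"kubernetesVersion"`
    		Networking        struct {
    			PodSubnet string `yaml:"podSubnet"`
    		} `yaml:"networking"`
    	}
    	if err := yaml.NewDecoder(strings.NewReader(doc)).Decode(&cfg); err != nil {
    		panic(err)
    	}
    	fmt.Println(cfg.Kind, cfg.KubernetesVersion, cfg.Networking.PodSubnet)
    }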
	
	I0903 22:28:04.367257  113968 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0903 22:28:04.378276  113968 binaries.go:44] Found k8s binaries, skipping transfer
	I0903 22:28:04.378339  113968 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0903 22:28:04.388816  113968 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0903 22:28:04.406928  113968 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0903 22:28:04.427285  113968 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2216 bytes)
	I0903 22:28:04.448067  113968 ssh_runner.go:195] Run: grep 192.168.39.230	control-plane.minikube.internal$ /etc/hosts
	I0903 22:28:04.452081  113968 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.230	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0903 22:28:04.465841  113968 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0903 22:28:04.617766  113968 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0903 22:28:04.654769  113968 certs.go:68] Setting up /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/addons-389176 for IP: 192.168.39.230
	I0903 22:28:04.654871  113968 certs.go:194] generating shared ca certs ...
	I0903 22:28:04.654910  113968 certs.go:226] acquiring lock for ca certs: {Name:mk06ceac58fd506484973632a7bd0b701183c2c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0903 22:28:04.655104  113968 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21341-109162/.minikube/ca.key
	I0903 22:28:04.768260  113968 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21341-109162/.minikube/ca.crt ...
	I0903 22:28:04.768292  113968 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21341-109162/.minikube/ca.crt: {Name:mk5fa708bf8e5fb943e0dee59684fd8644c1b1cb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0903 22:28:04.768467  113968 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21341-109162/.minikube/ca.key ...
	I0903 22:28:04.768478  113968 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21341-109162/.minikube/ca.key: {Name:mka0b9f1969d0e5d816f67f442ed24d5b396f00f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0903 22:28:04.768548  113968 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21341-109162/.minikube/proxy-client-ca.key
	I0903 22:28:04.831850  113968 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21341-109162/.minikube/proxy-client-ca.crt ...
	I0903 22:28:04.831880  113968 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21341-109162/.minikube/proxy-client-ca.crt: {Name:mkb3f33ceb157654e9a413698229f0f4815bd96e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0903 22:28:04.832040  113968 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21341-109162/.minikube/proxy-client-ca.key ...
	I0903 22:28:04.832053  113968 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21341-109162/.minikube/proxy-client-ca.key: {Name:mk0aa5b5fb2319bc9daaf59998d63dcb1e1c47d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0903 22:28:04.832116  113968 certs.go:256] generating profile certs ...
	I0903 22:28:04.832166  113968 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/addons-389176/client.key
	I0903 22:28:04.832187  113968 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/addons-389176/client.crt with IP's: []
	I0903 22:28:04.938679  113968 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/addons-389176/client.crt ...
	I0903 22:28:04.938717  113968 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/addons-389176/client.crt: {Name:mk30bc4a83a32693622424eb5d2e4ac5d63e0735 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0903 22:28:04.938877  113968 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/addons-389176/client.key ...
	I0903 22:28:04.938889  113968 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/addons-389176/client.key: {Name:mkdebaeaa8c1a69c53293b3f1d9ee470bb9c44b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0903 22:28:04.938955  113968 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/addons-389176/apiserver.key.98ca1753
	I0903 22:28:04.938974  113968 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/addons-389176/apiserver.crt.98ca1753 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.230]
	I0903 22:28:05.147792  113968 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/addons-389176/apiserver.crt.98ca1753 ...
	I0903 22:28:05.147829  113968 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/addons-389176/apiserver.crt.98ca1753: {Name:mk6af020f7c737df6ecb6637e9fda96f48fdf99d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0903 22:28:05.148127  113968 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/addons-389176/apiserver.key.98ca1753 ...
	I0903 22:28:05.148151  113968 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/addons-389176/apiserver.key.98ca1753: {Name:mk8d65b2d2c4d39e09fc975a1231e851fbe2835e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0903 22:28:05.148258  113968 certs.go:381] copying /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/addons-389176/apiserver.crt.98ca1753 -> /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/addons-389176/apiserver.crt
	I0903 22:28:05.148335  113968 certs.go:385] copying /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/addons-389176/apiserver.key.98ca1753 -> /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/addons-389176/apiserver.key
	I0903 22:28:05.148380  113968 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/addons-389176/proxy-client.key
	I0903 22:28:05.148398  113968 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/addons-389176/proxy-client.crt with IP's: []
	I0903 22:28:05.386953  113968 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/addons-389176/proxy-client.crt ...
	I0903 22:28:05.386985  113968 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/addons-389176/proxy-client.crt: {Name:mkf2061c581e5072e90511168a8868e8ae5a7e4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0903 22:28:05.387155  113968 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/addons-389176/proxy-client.key ...
	I0903 22:28:05.387169  113968 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/addons-389176/proxy-client.key: {Name:mke62563b517f145466af2f126478d55014df180 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
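Aside: the apiserver profile cert above is signed for the IPs [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.230]; the 10.96.0.1 entry is the in-cluster Service VIP, the first address of the 10.96.0.0/12 service CIDR from the config. A sketch of that derivation (naive increment, valid only while the last octet stays below 255):

    package main

    import (
    	"fmt"
    	"net"
    )

    // firstServiceIP returns the network address of the CIDR plus one,
    // i.e. the address Kubernetes uses for the "kubernetes" Service.
    func firstServiceIP(cidr string) (net.IP, error) {
    	_, ipnet, err := net.ParseCIDR(cidr)
    	if err != nil {
    		return nil, err
    	}
    	ip := ipnet.IP.To4()
    	out := make(net.IP, len(ip))
    	copy(out, ip)
    	out[3]++ // network address + 1
    	return out, nil
    }

    func main() {
    	ip, _ := firstServiceIP("10.96.0.0/12")
    	fmt.Println(ip) // 10.96.0.1
    }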
	I0903 22:28:05.387379  113968 certs.go:484] found cert: /home/jenkins/minikube-integration/21341-109162/.minikube/certs/ca-key.pem (1675 bytes)
	I0903 22:28:05.387416  113968 certs.go:484] found cert: /home/jenkins/minikube-integration/21341-109162/.minikube/certs/ca.pem (1078 bytes)
	I0903 22:28:05.387437  113968 certs.go:484] found cert: /home/jenkins/minikube-integration/21341-109162/.minikube/certs/cert.pem (1123 bytes)
	I0903 22:28:05.387460  113968 certs.go:484] found cert: /home/jenkins/minikube-integration/21341-109162/.minikube/certs/key.pem (1675 bytes)
	I0903 22:28:05.388019  113968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0903 22:28:05.424291  113968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0903 22:28:05.455903  113968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0903 22:28:05.481987  113968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0903 22:28:05.508813  113968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/addons-389176/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0903 22:28:05.535295  113968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/addons-389176/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0903 22:28:05.560961  113968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/addons-389176/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0903 22:28:05.587055  113968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/addons-389176/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0903 22:28:05.613380  113968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0903 22:28:05.639657  113968 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0903 22:28:05.657973  113968 ssh_runner.go:195] Run: openssl version
	I0903 22:28:05.663913  113968 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0903 22:28:05.675708  113968 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0903 22:28:05.680278  113968 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep  3 22:28 /usr/share/ca-certificates/minikubeCA.pem
	I0903 22:28:05.680345  113968 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0903 22:28:05.686765  113968 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
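Aside: the b5213941.0 symlink above is OpenSSL-style trust wiring: `openssl x509 -hash` prints the subject hash of the CA (b5213941 for minikubeCA.pem here), and OpenSSL looks up CAs in /etc/ssl/certs by <hash>.0. A Go sketch of the same step (must run as root; illustrative only):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    )

    // linkCA computes the subject hash of a PEM cert and symlinks it into the
    // OpenSSL trust directory under <hash>.0, as the log lines above do.
    func linkCA(pem string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
    	if err != nil {
    		return err
    	}
    	hash := strings.TrimSpace(string(out)) // e.g. b5213941
    	link := "/etc/ssl/certs/" + hash + ".0"
    	_ = os.Remove(link) // replace any stale link
    	return os.Symlink(pem, link)
    }

    func main() {
    	fmt.Println(linkCA("/usr/share/ca-certificates/minikubeCA.pem"))
    }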
	I0903 22:28:05.698753  113968 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0903 22:28:05.702770  113968 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0903 22:28:05.702813  113968 kubeadm.go:392] StartCluster: {Name:addons-389176 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21147/minikube-v1.36.0-1753487480-21147-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-389176 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.230 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0903 22:28:05.702896  113968 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0903 22:28:05.702941  113968 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0903 22:28:05.739028  113968 cri.go:89] found id: ""
	I0903 22:28:05.739112  113968 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0903 22:28:05.750783  113968 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0903 22:28:05.761736  113968 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0903 22:28:05.772237  113968 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0903 22:28:05.772257  113968 kubeadm.go:157] found existing configuration files:
	
	I0903 22:28:05.772310  113968 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0903 22:28:05.781811  113968 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0903 22:28:05.781864  113968 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0903 22:28:05.791995  113968 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0903 22:28:05.801240  113968 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0903 22:28:05.801292  113968 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0903 22:28:05.811605  113968 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0903 22:28:05.821380  113968 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0903 22:28:05.821452  113968 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0903 22:28:05.831626  113968 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0903 22:28:05.841130  113968 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0903 22:28:05.841184  113968 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
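
The block above (22:28:05.772 through 22:28:05.851) is minikube's stale-config cleanup: for each kubeconfig under /etc/kubernetes it greps for the expected control-plane endpoint and removes the file when the endpoint is absent (here the files simply don't exist yet, so every grep exits with status 2). A minimal Go sketch of the same check, run locally for illustration rather than over SSH as minikube does:

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Endpoint and file list taken from the log above; minikube runs the
	// equivalent commands on the guest via ssh_runner, not locally.
	endpoint := "https://control-plane.minikube.internal:8443"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		// grep exits non-zero when the pattern (or the file) is missing,
		// which is the "may not be in ... - will remove" case in the log.
		if err := exec.Command("grep", "-q", endpoint, f).Run(); err != nil {
			fmt.Printf("%s: stale or missing (%v), removing\n", f, err)
			os.Remove(f) // error ignored; the file may already be absent
		}
	}
}
```
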
	I0903 22:28:05.851430  113968 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0903 22:28:05.986759  113968 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0903 22:28:18.304809  113968 kubeadm.go:310] [init] Using Kubernetes version: v1.34.0
	I0903 22:28:18.304899  113968 kubeadm.go:310] [preflight] Running pre-flight checks
	I0903 22:28:18.305024  113968 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0903 22:28:18.305183  113968 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0903 22:28:18.305351  113968 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0903 22:28:18.305435  113968 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0903 22:28:18.306856  113968 out.go:252]   - Generating certificates and keys ...
	I0903 22:28:18.306923  113968 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0903 22:28:18.306977  113968 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0903 22:28:18.307040  113968 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0903 22:28:18.307092  113968 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0903 22:28:18.307158  113968 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0903 22:28:18.307207  113968 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0903 22:28:18.307256  113968 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0903 22:28:18.307350  113968 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-389176 localhost] and IPs [192.168.39.230 127.0.0.1 ::1]
	I0903 22:28:18.307394  113968 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0903 22:28:18.307496  113968 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-389176 localhost] and IPs [192.168.39.230 127.0.0.1 ::1]
	I0903 22:28:18.307554  113968 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0903 22:28:18.307611  113968 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0903 22:28:18.307678  113968 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0903 22:28:18.307770  113968 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0903 22:28:18.307849  113968 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0903 22:28:18.307928  113968 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0903 22:28:18.308018  113968 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0903 22:28:18.308105  113968 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0903 22:28:18.308190  113968 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0903 22:28:18.308305  113968 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0903 22:28:18.308380  113968 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0903 22:28:18.309415  113968 out.go:252]   - Booting up control plane ...
	I0903 22:28:18.309498  113968 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0903 22:28:18.309560  113968 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0903 22:28:18.309618  113968 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0903 22:28:18.309704  113968 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0903 22:28:18.309789  113968 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0903 22:28:18.309891  113968 kubeadm.go:310] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0903 22:28:18.309997  113968 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0903 22:28:18.310068  113968 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0903 22:28:18.310231  113968 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0903 22:28:18.310415  113968 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0903 22:28:18.310500  113968 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001818489s
	I0903 22:28:18.310598  113968 kubeadm.go:310] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0903 22:28:18.310707  113968 kubeadm.go:310] [control-plane-check] Checking kube-apiserver at https://192.168.39.230:8443/livez
	I0903 22:28:18.310815  113968 kubeadm.go:310] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0903 22:28:18.310921  113968 kubeadm.go:310] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0903 22:28:18.311029  113968 kubeadm.go:310] [control-plane-check] kube-controller-manager is healthy after 2.201166379s
	I0903 22:28:18.311102  113968 kubeadm.go:310] [control-plane-check] kube-scheduler is healthy after 3.356633835s
	I0903 22:28:18.311157  113968 kubeadm.go:310] [control-plane-check] kube-apiserver is healthy after 5.001754552s
	I0903 22:28:18.311248  113968 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0903 22:28:18.311358  113968 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0903 22:28:18.311406  113968 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0903 22:28:18.311550  113968 kubeadm.go:310] [mark-control-plane] Marking the node addons-389176 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0903 22:28:18.311602  113968 kubeadm.go:310] [bootstrap-token] Using token: k57mky.s2cryhl97wrhwy3u
	I0903 22:28:18.313258  113968 out.go:252]   - Configuring RBAC rules ...
	I0903 22:28:18.313338  113968 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0903 22:28:18.313437  113968 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0903 22:28:18.313586  113968 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0903 22:28:18.313751  113968 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0903 22:28:18.313874  113968 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0903 22:28:18.313948  113968 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0903 22:28:18.314053  113968 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0903 22:28:18.314097  113968 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0903 22:28:18.314134  113968 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0903 22:28:18.314140  113968 kubeadm.go:310] 
	I0903 22:28:18.314186  113968 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0903 22:28:18.314191  113968 kubeadm.go:310] 
	I0903 22:28:18.314251  113968 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0903 22:28:18.314256  113968 kubeadm.go:310] 
	I0903 22:28:18.314279  113968 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0903 22:28:18.314329  113968 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0903 22:28:18.314372  113968 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0903 22:28:18.314377  113968 kubeadm.go:310] 
	I0903 22:28:18.314427  113968 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0903 22:28:18.314433  113968 kubeadm.go:310] 
	I0903 22:28:18.314470  113968 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0903 22:28:18.314475  113968 kubeadm.go:310] 
	I0903 22:28:18.314515  113968 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0903 22:28:18.314582  113968 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0903 22:28:18.314675  113968 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0903 22:28:18.314685  113968 kubeadm.go:310] 
	I0903 22:28:18.314800  113968 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0903 22:28:18.314907  113968 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0903 22:28:18.314921  113968 kubeadm.go:310] 
	I0903 22:28:18.315043  113968 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token k57mky.s2cryhl97wrhwy3u \
	I0903 22:28:18.315202  113968 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:95ca73572d444299f99da04acbc8edf23d152075f1e7395b1d2227b91926b258 \
	I0903 22:28:18.315244  113968 kubeadm.go:310] 	--control-plane 
	I0903 22:28:18.315253  113968 kubeadm.go:310] 
	I0903 22:28:18.315380  113968 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0903 22:28:18.315388  113968 kubeadm.go:310] 
	I0903 22:28:18.315495  113968 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token k57mky.s2cryhl97wrhwy3u \
	I0903 22:28:18.315652  113968 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:95ca73572d444299f99da04acbc8edf23d152075f1e7395b1d2227b91926b258 
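
For reference, the sha256 value in the join command above is kubeadm's CA certificate hash: the hex-encoded SHA-256 of the DER-encoded Subject Public Key Info of the cluster CA. A small Go sketch that recomputes it from a CA certificate PEM (the path is an assumption; on a minikube guest the certs live under /var/lib/minikube/certs):

```go
package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	// Path is illustrative; point it at the cluster's CA certificate.
	pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// kubeadm hashes the DER-encoded SubjectPublicKeyInfo of the CA cert.
	spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
	if err != nil {
		panic(err)
	}
	fmt.Printf("sha256:%x\n", sha256.Sum256(spki))
}
```
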
	I0903 22:28:18.315666  113968 cni.go:84] Creating CNI manager for ""
	I0903 22:28:18.315673  113968 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0903 22:28:18.317284  113968 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I0903 22:28:18.318188  113968 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0903 22:28:18.332708  113968 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
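
The 496-byte file scp'd to /etc/cni/net.d/1-k8s.conflist above is the bridge CNI configuration minikube recommends for the kvm2 + crio combination. A representative conflist of roughly that shape is embedded in the sketch below; the field values are a generic example of the CNI bridge plugin's config format, not a byte-for-byte copy of the file minikube generates:

```go
package main

import "fmt"

// A representative bridge CNI conflist; values are illustrative and do
// not reproduce the exact 496-byte file minikube writes.
const bridgeConflist = `{
  "cniVersion": "0.4.0",
  "name": "k8s",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16",
        "routes": [{"dst": "0.0.0.0/0"}]
      }
    },
    {
      "type": "portmap",
      "capabilities": {"portMappings": true}
    }
  ]
}`

func main() {
	// In the log this content is copied to /etc/cni/net.d/1-k8s.conflist.
	fmt.Println(bridgeConflist)
}
```
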
	I0903 22:28:18.354780  113968 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0903 22:28:18.354879  113968 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0903 22:28:18.354917  113968 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-389176 minikube.k8s.io/updated_at=2025_09_03T22_28_18_0700 minikube.k8s.io/version=v1.36.0 minikube.k8s.io/commit=b3583632deefb20d71cab8d8ac0a8c3504aed1fb minikube.k8s.io/name=addons-389176 minikube.k8s.io/primary=true
	I0903 22:28:18.494299  113968 ops.go:34] apiserver oom_adj: -16
	I0903 22:28:18.494413  113968 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0903 22:28:18.995247  113968 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0903 22:28:19.494783  113968 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0903 22:28:19.994644  113968 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0903 22:28:20.495133  113968 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0903 22:28:20.994507  113968 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0903 22:28:21.495475  113968 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0903 22:28:21.994858  113968 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0903 22:28:22.494838  113968 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0903 22:28:22.995306  113968 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0903 22:28:23.088427  113968 kubeadm.go:1105] duration metric: took 4.733616913s to wait for elevateKubeSystemPrivileges
	I0903 22:28:23.088478  113968 kubeadm.go:394] duration metric: took 17.385666421s to StartCluster
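
The run of identical commands from 22:28:18.494 through 22:28:23.088 is minikube's elevateKubeSystemPrivileges wait: after creating the minikube-rbac cluster role binding, it polls `kubectl get sa default` about twice a second until the default service account exists. A hedged Go sketch of that poll loop, shelling out to kubectl locally (minikube runs it on the guest over SSH; the kubeconfig path mirrors the log):

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// ~500ms poll interval matches the timestamps above; the 2m timeout
	// is an illustrative bound, not minikube's actual deadline.
	kubeconfig := "/var/lib/minikube/kubeconfig"
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		// The default ServiceAccount appears once the controller-manager's
		// serviceaccount controller has run in the new cluster.
		err := exec.Command("kubectl", "get", "sa", "default",
			"--kubeconfig="+kubeconfig).Run()
		if err == nil {
			fmt.Println("default service account is ready")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for the default service account")
}
```
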
	I0903 22:28:23.088509  113968 settings.go:142] acquiring lock: {Name:mkb1ef9c34f4ee762bb1ce9c74e3b8a2e234a4c2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0903 22:28:23.088655  113968 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21341-109162/kubeconfig
	I0903 22:28:23.089214  113968 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21341-109162/kubeconfig: {Name:mkb8e4377749caffe9aea4452a6b5de9f0dc7427 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0903 22:28:23.089530  113968 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.230 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0903 22:28:23.089546  113968 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0903 22:28:23.089559  113968 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0903 22:28:23.089705  113968 addons.go:69] Setting yakd=true in profile "addons-389176"
	I0903 22:28:23.089752  113968 addons.go:69] Setting cloud-spanner=true in profile "addons-389176"
	I0903 22:28:23.089767  113968 addons.go:69] Setting gcp-auth=true in profile "addons-389176"
	I0903 22:28:23.089735  113968 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-389176"
	I0903 22:28:23.089778  113968 addons.go:238] Setting addon cloud-spanner=true in "addons-389176"
	I0903 22:28:23.089777  113968 addons.go:69] Setting inspektor-gadget=true in profile "addons-389176"
	I0903 22:28:23.089795  113968 addons.go:238] Setting addon inspektor-gadget=true in "addons-389176"
	I0903 22:28:23.089807  113968 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-389176"
	I0903 22:28:23.089732  113968 addons.go:69] Setting ingress=true in profile "addons-389176"
	I0903 22:28:23.089820  113968 addons.go:69] Setting volcano=true in profile "addons-389176"
	I0903 22:28:23.089821  113968 addons.go:69] Setting registry-creds=true in profile "addons-389176"
	I0903 22:28:23.089828  113968 addons.go:238] Setting addon ingress=true in "addons-389176"
	I0903 22:28:23.089828  113968 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-389176"
	I0903 22:28:23.089832  113968 addons.go:238] Setting addon volcano=true in "addons-389176"
	I0903 22:28:23.089836  113968 addons.go:238] Setting addon registry-creds=true in "addons-389176"
	I0903 22:28:23.089844  113968 addons.go:69] Setting metrics-server=true in profile "addons-389176"
	I0903 22:28:23.089856  113968 host.go:66] Checking if "addons-389176" exists ...
	I0903 22:28:23.089863  113968 host.go:66] Checking if "addons-389176" exists ...
	I0903 22:28:23.089867  113968 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-389176"
	I0903 22:28:23.089867  113968 addons.go:69] Setting storage-provisioner=true in profile "addons-389176"
	I0903 22:28:23.089880  113968 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-389176"
	I0903 22:28:23.089882  113968 addons.go:238] Setting addon storage-provisioner=true in "addons-389176"
	I0903 22:28:23.089899  113968 host.go:66] Checking if "addons-389176" exists ...
	I0903 22:28:23.089909  113968 addons.go:69] Setting volumesnapshots=true in profile "addons-389176"
	I0903 22:28:23.089920  113968 addons.go:238] Setting addon volumesnapshots=true in "addons-389176"
	I0903 22:28:23.089942  113968 host.go:66] Checking if "addons-389176" exists ...
	I0903 22:28:23.089798  113968 mustload.go:65] Loading cluster: addons-389176
	I0903 22:28:23.089736  113968 addons.go:69] Setting registry=true in profile "addons-389176"
	I0903 22:28:23.090330  113968 addons.go:238] Setting addon registry=true in "addons-389176"
	I0903 22:28:23.090353  113968 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21341-109162/.minikube/bin/docker-machine-driver-kvm2
	I0903 22:28:23.090375  113968 host.go:66] Checking if "addons-389176" exists ...
	I0903 22:28:23.090392  113968 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0903 22:28:23.089900  113968 host.go:66] Checking if "addons-389176" exists ...
	I0903 22:28:23.090426  113968 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21341-109162/.minikube/bin/docker-machine-driver-kvm2
	I0903 22:28:23.090464  113968 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0903 22:28:23.090510  113968 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21341-109162/.minikube/bin/docker-machine-driver-kvm2
	I0903 22:28:23.090518  113968 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21341-109162/.minikube/bin/docker-machine-driver-kvm2
	I0903 22:28:23.090529  113968 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21341-109162/.minikube/bin/docker-machine-driver-kvm2
	I0903 22:28:23.090533  113968 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0903 22:28:23.090537  113968 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0903 22:28:23.090562  113968 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0903 22:28:23.089810  113968 host.go:66] Checking if "addons-389176" exists ...
	I0903 22:28:23.090798  113968 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21341-109162/.minikube/bin/docker-machine-driver-kvm2
	I0903 22:28:23.090832  113968 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0903 22:28:23.090849  113968 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21341-109162/.minikube/bin/docker-machine-driver-kvm2
	I0903 22:28:23.090872  113968 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0903 22:28:23.090930  113968 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21341-109162/.minikube/bin/docker-machine-driver-kvm2
	I0903 22:28:23.089769  113968 addons.go:69] Setting ingress-dns=true in profile "addons-389176"
	I0903 22:28:23.090950  113968 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0903 22:28:23.090958  113968 addons.go:238] Setting addon ingress-dns=true in "addons-389176"
	I0903 22:28:23.090990  113968 host.go:66] Checking if "addons-389176" exists ...
	I0903 22:28:23.089757  113968 addons.go:69] Setting default-storageclass=true in profile "addons-389176"
	I0903 22:28:23.091057  113968 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-389176"
	I0903 22:28:23.089809  113968 config.go:182] Loaded profile config "addons-389176": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0903 22:28:23.089859  113968 host.go:66] Checking if "addons-389176" exists ...
	I0903 22:28:23.089858  113968 addons.go:238] Setting addon metrics-server=true in "addons-389176"
	I0903 22:28:23.091734  113968 host.go:66] Checking if "addons-389176" exists ...
	I0903 22:28:23.091962  113968 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21341-109162/.minikube/bin/docker-machine-driver-kvm2
	I0903 22:28:23.091979  113968 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0903 22:28:23.092066  113968 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21341-109162/.minikube/bin/docker-machine-driver-kvm2
	I0903 22:28:23.092089  113968 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0903 22:28:23.092237  113968 out.go:179] * Verifying Kubernetes components...
	I0903 22:28:23.089769  113968 addons.go:238] Setting addon yakd=true in "addons-389176"
	I0903 22:28:23.092442  113968 host.go:66] Checking if "addons-389176" exists ...
	I0903 22:28:23.089745  113968 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-389176"
	I0903 22:28:23.092700  113968 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-389176"
	I0903 22:28:23.092729  113968 host.go:66] Checking if "addons-389176" exists ...
	I0903 22:28:23.093545  113968 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0903 22:28:23.089835  113968 host.go:66] Checking if "addons-389176" exists ...
	I0903 22:28:23.089814  113968 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-389176"
	I0903 22:28:23.097697  113968 host.go:66] Checking if "addons-389176" exists ...
	I0903 22:28:23.098128  113968 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21341-109162/.minikube/bin/docker-machine-driver-kvm2
	I0903 22:28:23.098158  113968 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0903 22:28:23.090392  113968 config.go:182] Loaded profile config "addons-389176": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0903 22:28:23.098788  113968 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21341-109162/.minikube/bin/docker-machine-driver-kvm2
	I0903 22:28:23.098854  113968 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0903 22:28:23.112228  113968 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44111
	I0903 22:28:23.112599  113968 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43499
	I0903 22:28:23.113076  113968 main.go:141] libmachine: () Calling .GetVersion
	I0903 22:28:23.113879  113968 main.go:141] libmachine: Using API Version  1
	I0903 22:28:23.113901  113968 main.go:141] libmachine: () Calling .SetConfigRaw
	I0903 22:28:23.114378  113968 main.go:141] libmachine: () Calling .GetMachineName
	I0903 22:28:23.114993  113968 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21341-109162/.minikube/bin/docker-machine-driver-kvm2
	I0903 22:28:23.115040  113968 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0903 22:28:23.115278  113968 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40859
	I0903 22:28:23.115802  113968 main.go:141] libmachine: () Calling .GetVersion
	I0903 22:28:23.116232  113968 main.go:141] libmachine: Using API Version  1
	I0903 22:28:23.116256  113968 main.go:141] libmachine: () Calling .SetConfigRaw
	I0903 22:28:23.116705  113968 main.go:141] libmachine: () Calling .GetMachineName
	I0903 22:28:23.116960  113968 main.go:141] libmachine: (addons-389176) Calling .GetState
	I0903 22:28:23.117974  113968 main.go:141] libmachine: () Calling .GetVersion
	I0903 22:28:23.118378  113968 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41895
	I0903 22:28:23.118633  113968 main.go:141] libmachine: Using API Version  1
	I0903 22:28:23.118649  113968 main.go:141] libmachine: () Calling .SetConfigRaw
	I0903 22:28:23.119273  113968 main.go:141] libmachine: () Calling .GetVersion
	I0903 22:28:23.119298  113968 main.go:141] libmachine: () Calling .GetMachineName
	I0903 22:28:23.119899  113968 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21341-109162/.minikube/bin/docker-machine-driver-kvm2
	I0903 22:28:23.119961  113968 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0903 22:28:23.120241  113968 main.go:141] libmachine: Using API Version  1
	I0903 22:28:23.120259  113968 main.go:141] libmachine: () Calling .SetConfigRaw
	I0903 22:28:23.120635  113968 main.go:141] libmachine: () Calling .GetMachineName
	I0903 22:28:23.125902  113968 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21341-109162/.minikube/bin/docker-machine-driver-kvm2
	I0903 22:28:23.125952  113968 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0903 22:28:23.126554  113968 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21341-109162/.minikube/bin/docker-machine-driver-kvm2
	I0903 22:28:23.126596  113968 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0903 22:28:23.127108  113968 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21341-109162/.minikube/bin/docker-machine-driver-kvm2
	I0903 22:28:23.127130  113968 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0903 22:28:23.127287  113968 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21341-109162/.minikube/bin/docker-machine-driver-kvm2
	I0903 22:28:23.127328  113968 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0903 22:28:23.127341  113968 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-389176"
	I0903 22:28:23.127617  113968 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45723
	I0903 22:28:23.127742  113968 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21341-109162/.minikube/bin/docker-machine-driver-kvm2
	I0903 22:28:23.127762  113968 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21341-109162/.minikube/bin/docker-machine-driver-kvm2
	I0903 22:28:23.127785  113968 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0903 22:28:23.127792  113968 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35323
	I0903 22:28:23.127802  113968 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36495
	I0903 22:28:23.127765  113968 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0903 22:28:23.128123  113968 host.go:66] Checking if "addons-389176" exists ...
	I0903 22:28:23.128499  113968 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21341-109162/.minikube/bin/docker-machine-driver-kvm2
	I0903 22:28:23.128521  113968 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0903 22:28:23.128716  113968 main.go:141] libmachine: () Calling .GetVersion
	I0903 22:28:23.128828  113968 main.go:141] libmachine: () Calling .GetVersion
	I0903 22:28:23.128912  113968 main.go:141] libmachine: () Calling .GetVersion
	I0903 22:28:23.129423  113968 main.go:141] libmachine: Using API Version  1
	I0903 22:28:23.129442  113968 main.go:141] libmachine: () Calling .SetConfigRaw
	I0903 22:28:23.129601  113968 main.go:141] libmachine: Using API Version  1
	I0903 22:28:23.129617  113968 main.go:141] libmachine: () Calling .SetConfigRaw
	I0903 22:28:23.129862  113968 main.go:141] libmachine: Using API Version  1
	I0903 22:28:23.129885  113968 main.go:141] libmachine: () Calling .SetConfigRaw
	I0903 22:28:23.129942  113968 main.go:141] libmachine: () Calling .GetMachineName
	I0903 22:28:23.129999  113968 main.go:141] libmachine: () Calling .GetMachineName
	I0903 22:28:23.130484  113968 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21341-109162/.minikube/bin/docker-machine-driver-kvm2
	I0903 22:28:23.130533  113968 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0903 22:28:23.139203  113968 main.go:141] libmachine: () Calling .GetMachineName
	I0903 22:28:23.139652  113968 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21341-109162/.minikube/bin/docker-machine-driver-kvm2
	I0903 22:28:23.139687  113968 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0903 22:28:23.149576  113968 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39825
	I0903 22:28:23.149599  113968 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34059
	I0903 22:28:23.149771  113968 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45029
	I0903 22:28:23.149824  113968 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41051
	I0903 22:28:23.149913  113968 main.go:141] libmachine: (addons-389176) Calling .GetState
	I0903 22:28:23.150416  113968 main.go:141] libmachine: () Calling .GetVersion
	I0903 22:28:23.150531  113968 main.go:141] libmachine: () Calling .GetVersion
	I0903 22:28:23.151048  113968 main.go:141] libmachine: Using API Version  1
	I0903 22:28:23.151071  113968 main.go:141] libmachine: () Calling .SetConfigRaw
	I0903 22:28:23.151172  113968 main.go:141] libmachine: Using API Version  1
	I0903 22:28:23.151193  113968 main.go:141] libmachine: () Calling .SetConfigRaw
	I0903 22:28:23.151569  113968 main.go:141] libmachine: () Calling .GetMachineName
	I0903 22:28:23.151633  113968 main.go:141] libmachine: () Calling .GetMachineName
	I0903 22:28:23.152261  113968 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21341-109162/.minikube/bin/docker-machine-driver-kvm2
	I0903 22:28:23.152278  113968 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21341-109162/.minikube/bin/docker-machine-driver-kvm2
	I0903 22:28:23.152289  113968 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0903 22:28:23.152316  113968 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0903 22:28:23.152394  113968 host.go:66] Checking if "addons-389176" exists ...
	I0903 22:28:23.166086  113968 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43569
	I0903 22:28:23.169355  113968 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35075
	I0903 22:28:23.169902  113968 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21341-109162/.minikube/bin/docker-machine-driver-kvm2
	I0903 22:28:23.169937  113968 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0903 22:28:23.170225  113968 main.go:141] libmachine: () Calling .GetVersion
	I0903 22:28:23.170260  113968 main.go:141] libmachine: () Calling .GetVersion
	I0903 22:28:23.170423  113968 main.go:141] libmachine: () Calling .GetVersion
	I0903 22:28:23.170437  113968 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40223
	I0903 22:28:23.170729  113968 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35815
	I0903 22:28:23.170745  113968 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45763
	I0903 22:28:23.170750  113968 main.go:141] libmachine: Using API Version  1
	I0903 22:28:23.170766  113968 main.go:141] libmachine: () Calling .SetConfigRaw
	I0903 22:28:23.170911  113968 main.go:141] libmachine: Using API Version  1
	I0903 22:28:23.170925  113968 main.go:141] libmachine: () Calling .SetConfigRaw
	I0903 22:28:23.170930  113968 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35705
	I0903 22:28:23.171035  113968 main.go:141] libmachine: () Calling .GetVersion
	I0903 22:28:23.171153  113968 main.go:141] libmachine: () Calling .GetMachineName
	I0903 22:28:23.171181  113968 main.go:141] libmachine: Using API Version  1
	I0903 22:28:23.171192  113968 main.go:141] libmachine: () Calling .SetConfigRaw
	I0903 22:28:23.171219  113968 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40013
	I0903 22:28:23.171241  113968 main.go:141] libmachine: () Calling .GetVersion
	I0903 22:28:23.171298  113968 main.go:141] libmachine: () Calling .GetMachineName
	I0903 22:28:23.171587  113968 main.go:141] libmachine: () Calling .GetVersion
	I0903 22:28:23.171619  113968 main.go:141] libmachine: () Calling .GetVersion
	I0903 22:28:23.171679  113968 main.go:141] libmachine: () Calling .GetMachineName
	I0903 22:28:23.171679  113968 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21341-109162/.minikube/bin/docker-machine-driver-kvm2
	I0903 22:28:23.171720  113968 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0903 22:28:23.171793  113968 main.go:141] libmachine: Using API Version  1
	I0903 22:28:23.171804  113968 main.go:141] libmachine: () Calling .SetConfigRaw
	I0903 22:28:23.172708  113968 main.go:141] libmachine: () Calling .GetMachineName
	I0903 22:28:23.172837  113968 main.go:141] libmachine: Using API Version  1
	I0903 22:28:23.172857  113968 main.go:141] libmachine: () Calling .SetConfigRaw
	I0903 22:28:23.172881  113968 main.go:141] libmachine: Using API Version  1
	I0903 22:28:23.172898  113968 main.go:141] libmachine: () Calling .SetConfigRaw
	I0903 22:28:23.172932  113968 main.go:141] libmachine: (addons-389176) Calling .GetState
	I0903 22:28:23.173034  113968 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21341-109162/.minikube/bin/docker-machine-driver-kvm2
	I0903 22:28:23.173050  113968 main.go:141] libmachine: Using API Version  1
	I0903 22:28:23.173065  113968 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0903 22:28:23.173090  113968 main.go:141] libmachine: () Calling .GetVersion
	I0903 22:28:23.173258  113968 main.go:141] libmachine: () Calling .SetConfigRaw
	I0903 22:28:23.173273  113968 main.go:141] libmachine: () Calling .GetMachineName
	I0903 22:28:23.173719  113968 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21341-109162/.minikube/bin/docker-machine-driver-kvm2
	I0903 22:28:23.173758  113968 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0903 22:28:23.173792  113968 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21341-109162/.minikube/bin/docker-machine-driver-kvm2
	I0903 22:28:23.173823  113968 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0903 22:28:23.174048  113968 main.go:141] libmachine: () Calling .GetMachineName
	I0903 22:28:23.174295  113968 main.go:141] libmachine: () Calling .GetMachineName
	I0903 22:28:23.174633  113968 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21341-109162/.minikube/bin/docker-machine-driver-kvm2
	I0903 22:28:23.174704  113968 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0903 22:28:23.176149  113968 addons.go:238] Setting addon default-storageclass=true in "addons-389176"
	I0903 22:28:23.176192  113968 host.go:66] Checking if "addons-389176" exists ...
	I0903 22:28:23.176514  113968 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21341-109162/.minikube/bin/docker-machine-driver-kvm2
	I0903 22:28:23.176552  113968 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0903 22:28:23.177552  113968 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21341-109162/.minikube/bin/docker-machine-driver-kvm2
	I0903 22:28:23.177594  113968 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0903 22:28:23.178219  113968 main.go:141] libmachine: Using API Version  1
	I0903 22:28:23.178242  113968 main.go:141] libmachine: () Calling .SetConfigRaw
	I0903 22:28:23.178652  113968 main.go:141] libmachine: () Calling .GetMachineName
	I0903 22:28:23.178883  113968 main.go:141] libmachine: (addons-389176) Calling .GetState
	I0903 22:28:23.180477  113968 main.go:141] libmachine: (addons-389176) Calling .DriverName
	I0903 22:28:23.181639  113968 main.go:141] libmachine: () Calling .GetVersion
	I0903 22:28:23.182304  113968 main.go:141] libmachine: Using API Version  1
	I0903 22:28:23.182321  113968 main.go:141] libmachine: () Calling .SetConfigRaw
	I0903 22:28:23.182738  113968 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0903 22:28:23.182744  113968 main.go:141] libmachine: () Calling .GetMachineName
	I0903 22:28:23.183008  113968 main.go:141] libmachine: (addons-389176) Calling .GetState
	I0903 22:28:23.184017  113968 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0903 22:28:23.184035  113968 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0903 22:28:23.184139  113968 main.go:141] libmachine: (addons-389176) Calling .GetSSHHostname
	I0903 22:28:23.184771  113968 main.go:141] libmachine: (addons-389176) Calling .DriverName
	I0903 22:28:23.187009  113968 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36777
	I0903 22:28:23.187236  113968 main.go:141] libmachine: (addons-389176) DBG | domain addons-389176 has defined MAC address 52:54:00:ad:01:4e in network mk-addons-389176
	I0903 22:28:23.187565  113968 main.go:141] libmachine: () Calling .GetVersion
	I0903 22:28:23.187645  113968 main.go:141] libmachine: (addons-389176) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:01:4e", ip: ""} in network mk-addons-389176: {Iface:virbr1 ExpiryTime:2025-09-03 23:27:49 +0000 UTC Type:0 Mac:52:54:00:ad:01:4e Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:addons-389176 Clientid:01:52:54:00:ad:01:4e}
	I0903 22:28:23.187660  113968 main.go:141] libmachine: (addons-389176) DBG | domain addons-389176 has defined IP address 192.168.39.230 and MAC address 52:54:00:ad:01:4e in network mk-addons-389176
	I0903 22:28:23.187831  113968 main.go:141] libmachine: (addons-389176) Calling .GetSSHPort
	I0903 22:28:23.188019  113968 main.go:141] libmachine: (addons-389176) Calling .GetSSHKeyPath
	I0903 22:28:23.188165  113968 main.go:141] libmachine: (addons-389176) Calling .GetSSHUsername
	I0903 22:28:23.188295  113968 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21341-109162/.minikube/machines/addons-389176/id_rsa Username:docker}
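
The sshutil.go:53 line above constructs the SSH client that the addon installers reuse: the key path, user docker, guest IP and port 22 all come from the libmachine kvm2 driver. The later `scp memory --> ...` lines then stream an in-memory manifest to the guest over that connection. Below is a sketch of both steps with golang.org/x/crypto/ssh; piping through `sudo tee` stands in for minikube's actual copy implementation, and the demo.yaml manifest and target path are hypothetical:

```go
package main

import (
	"fmt"
	"os"
	"strings"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Key path, user, and address mirror the sshutil log line above.
	key, err := os.ReadFile("/home/jenkins/minikube-integration/21341-109162/.minikube/machines/addons-389176/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User: "docker",
		Auth: []ssh.AuthMethod{ssh.PublicKeys(signer)},
		// Acceptable for a throwaway test VM; real code should verify host keys.
		HostKeyCallback: ssh.InsecureIgnoreHostKey(),
	}
	client, err := ssh.Dial("tcp", "192.168.39.230:22", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()

	// "scp memory --> path": stream an in-memory manifest through sudo tee.
	// demo.yaml is a hypothetical stand-in for the addon manifests above.
	manifest := "apiVersion: v1\nkind: Namespace\nmetadata:\n  name: demo\n"
	sess, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer sess.Close()
	sess.Stdin = strings.NewReader(manifest)
	if err := sess.Run("sudo tee /etc/kubernetes/addons/demo.yaml >/dev/null"); err != nil {
		panic(err)
	}
	fmt.Println("manifest copied")
}
```
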
	I0903 22:28:23.189102  113968 main.go:141] libmachine: Using API Version  1
	I0903 22:28:23.189118  113968 main.go:141] libmachine: () Calling .SetConfigRaw
	I0903 22:28:23.189518  113968 main.go:141] libmachine: () Calling .GetMachineName
	I0903 22:28:23.190087  113968 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21341-109162/.minikube/bin/docker-machine-driver-kvm2
	I0903 22:28:23.190139  113968 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0903 22:28:23.193638  113968 main.go:141] libmachine: Making call to close driver server
	I0903 22:28:23.193656  113968 main.go:141] libmachine: (addons-389176) Calling .Close
	I0903 22:28:23.196007  113968 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34231
	I0903 22:28:23.196694  113968 main.go:141] libmachine: () Calling .GetVersion
	I0903 22:28:23.197360  113968 main.go:141] libmachine: Using API Version  1
	I0903 22:28:23.197380  113968 main.go:141] libmachine: () Calling .SetConfigRaw
	I0903 22:28:23.197852  113968 main.go:141] libmachine: () Calling .GetMachineName
	I0903 22:28:23.199612  113968 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45441
	I0903 22:28:23.201609  113968 main.go:141] libmachine: Successfully made call to close driver server
	I0903 22:28:23.201622  113968 main.go:141] libmachine: Making call to close connection to plugin binary
	I0903 22:28:23.201631  113968 main.go:141] libmachine: Making call to close driver server
	I0903 22:28:23.201638  113968 main.go:141] libmachine: (addons-389176) Calling .Close
	I0903 22:28:23.201905  113968 main.go:141] libmachine: (addons-389176) DBG | Closing plugin on server side
	I0903 22:28:23.201929  113968 main.go:141] libmachine: Successfully made call to close driver server
	I0903 22:28:23.201943  113968 main.go:141] libmachine: Making call to close connection to plugin binary
	W0903 22:28:23.202040  113968 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0903 22:28:23.202135  113968 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21341-109162/.minikube/bin/docker-machine-driver-kvm2
	I0903 22:28:23.202159  113968 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0903 22:28:23.203794  113968 main.go:141] libmachine: () Calling .GetVersion
	I0903 22:28:23.204338  113968 main.go:141] libmachine: Using API Version  1
	I0903 22:28:23.204365  113968 main.go:141] libmachine: () Calling .SetConfigRaw
	I0903 22:28:23.204907  113968 main.go:141] libmachine: () Calling .GetMachineName
	I0903 22:28:23.205159  113968 main.go:141] libmachine: (addons-389176) Calling .GetState
	I0903 22:28:23.208134  113968 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37409
	I0903 22:28:23.208947  113968 main.go:141] libmachine: () Calling .GetVersion
	I0903 22:28:23.209668  113968 main.go:141] libmachine: (addons-389176) Calling .DriverName
	I0903 22:28:23.210209  113968 main.go:141] libmachine: Using API Version  1
	I0903 22:28:23.210235  113968 main.go:141] libmachine: () Calling .SetConfigRaw
	I0903 22:28:23.210847  113968 main.go:141] libmachine: () Calling .GetMachineName
	I0903 22:28:23.211059  113968 main.go:141] libmachine: (addons-389176) Calling .GetState
	I0903 22:28:23.211702  113968 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.2
	I0903 22:28:23.212770  113968 main.go:141] libmachine: (addons-389176) Calling .DriverName
	I0903 22:28:23.213967  113968 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I0903 22:28:23.214021  113968 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0903 22:28:23.215457  113968 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0903 22:28:23.215476  113968 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0903 22:28:23.215514  113968 main.go:141] libmachine: (addons-389176) Calling .GetSSHHostname
	I0903 22:28:23.216577  113968 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I0903 22:28:23.217825  113968 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0903 22:28:23.217845  113968 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0903 22:28:23.217866  113968 main.go:141] libmachine: (addons-389176) Calling .GetSSHHostname
	I0903 22:28:23.218871  113968 main.go:141] libmachine: (addons-389176) DBG | domain addons-389176 has defined MAC address 52:54:00:ad:01:4e in network mk-addons-389176
	I0903 22:28:23.220179  113968 main.go:141] libmachine: (addons-389176) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:01:4e", ip: ""} in network mk-addons-389176: {Iface:virbr1 ExpiryTime:2025-09-03 23:27:49 +0000 UTC Type:0 Mac:52:54:00:ad:01:4e Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:addons-389176 Clientid:01:52:54:00:ad:01:4e}
	I0903 22:28:23.220369  113968 main.go:141] libmachine: (addons-389176) DBG | domain addons-389176 has defined IP address 192.168.39.230 and MAC address 52:54:00:ad:01:4e in network mk-addons-389176
	I0903 22:28:23.220484  113968 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43657
	I0903 22:28:23.220637  113968 main.go:141] libmachine: (addons-389176) Calling .GetSSHPort
	I0903 22:28:23.220859  113968 main.go:141] libmachine: (addons-389176) DBG | domain addons-389176 has defined MAC address 52:54:00:ad:01:4e in network mk-addons-389176
	I0903 22:28:23.220893  113968 main.go:141] libmachine: (addons-389176) Calling .GetSSHKeyPath
	I0903 22:28:23.221077  113968 main.go:141] libmachine: (addons-389176) Calling .GetSSHUsername
	I0903 22:28:23.221278  113968 main.go:141] libmachine: (addons-389176) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:01:4e", ip: ""} in network mk-addons-389176: {Iface:virbr1 ExpiryTime:2025-09-03 23:27:49 +0000 UTC Type:0 Mac:52:54:00:ad:01:4e Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:addons-389176 Clientid:01:52:54:00:ad:01:4e}
	I0903 22:28:23.221302  113968 main.go:141] libmachine: (addons-389176) DBG | domain addons-389176 has defined IP address 192.168.39.230 and MAC address 52:54:00:ad:01:4e in network mk-addons-389176
	I0903 22:28:23.221342  113968 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21341-109162/.minikube/machines/addons-389176/id_rsa Username:docker}
	I0903 22:28:23.221780  113968 main.go:141] libmachine: () Calling .GetVersion
	I0903 22:28:23.221887  113968 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45759
	I0903 22:28:23.222204  113968 main.go:141] libmachine: Using API Version  1
	I0903 22:28:23.222223  113968 main.go:141] libmachine: () Calling .SetConfigRaw
	I0903 22:28:23.222282  113968 main.go:141] libmachine: (addons-389176) Calling .GetSSHPort
	I0903 22:28:23.222494  113968 main.go:141] libmachine: (addons-389176) Calling .GetSSHKeyPath
	I0903 22:28:23.222654  113968 main.go:141] libmachine: (addons-389176) Calling .GetSSHUsername
	I0903 22:28:23.222780  113968 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21341-109162/.minikube/machines/addons-389176/id_rsa Username:docker}
	I0903 22:28:23.223584  113968 main.go:141] libmachine: () Calling .GetVersion
	I0903 22:28:23.224122  113968 main.go:141] libmachine: Using API Version  1
	I0903 22:28:23.224138  113968 main.go:141] libmachine: () Calling .SetConfigRaw
	I0903 22:28:23.224517  113968 main.go:141] libmachine: () Calling .GetMachineName
	I0903 22:28:23.224687  113968 main.go:141] libmachine: (addons-389176) Calling .GetState
	I0903 22:28:23.225289  113968 main.go:141] libmachine: () Calling .GetMachineName
	I0903 22:28:23.225532  113968 main.go:141] libmachine: (addons-389176) Calling .GetState
	I0903 22:28:23.226130  113968 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41781
	I0903 22:28:23.226644  113968 main.go:141] libmachine: () Calling .GetVersion
	I0903 22:28:23.227058  113968 main.go:141] libmachine: Using API Version  1
	I0903 22:28:23.227085  113968 main.go:141] libmachine: () Calling .SetConfigRaw
	I0903 22:28:23.227174  113968 main.go:141] libmachine: (addons-389176) Calling .DriverName
	I0903 22:28:23.227561  113968 main.go:141] libmachine: (addons-389176) Calling .DriverName
	I0903 22:28:23.227621  113968 main.go:141] libmachine: () Calling .GetMachineName
	I0903 22:28:23.227932  113968 main.go:141] libmachine: (addons-389176) Calling .GetState
	I0903 22:28:23.229408  113968 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I0903 22:28:23.229470  113968 main.go:141] libmachine: (addons-389176) Calling .DriverName
	I0903 22:28:23.229414  113968 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I0903 22:28:23.230601  113968 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0903 22:28:23.230613  113968 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0903 22:28:23.230630  113968 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I0903 22:28:23.230654  113968 main.go:141] libmachine: (addons-389176) Calling .GetSSHHostname
	I0903 22:28:23.231739  113968 out.go:179]   - Using image docker.io/registry:3.0.0
	I0903 22:28:23.231837  113968 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0903 22:28:23.231857  113968 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0903 22:28:23.231880  113968 main.go:141] libmachine: (addons-389176) Calling .GetSSHHostname
	I0903 22:28:23.232860  113968 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I0903 22:28:23.232877  113968 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0903 22:28:23.232896  113968 main.go:141] libmachine: (addons-389176) Calling .GetSSHHostname
	I0903 22:28:23.234892  113968 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39287
	I0903 22:28:23.235425  113968 main.go:141] libmachine: (addons-389176) DBG | domain addons-389176 has defined MAC address 52:54:00:ad:01:4e in network mk-addons-389176
	I0903 22:28:23.235566  113968 main.go:141] libmachine: () Calling .GetVersion
	I0903 22:28:23.235733  113968 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45979
	I0903 22:28:23.236144  113968 main.go:141] libmachine: () Calling .GetVersion
	I0903 22:28:23.236665  113968 main.go:141] libmachine: Using API Version  1
	I0903 22:28:23.236688  113968 main.go:141] libmachine: () Calling .SetConfigRaw
	I0903 22:28:23.236751  113968 main.go:141] libmachine: (addons-389176) DBG | domain addons-389176 has defined MAC address 52:54:00:ad:01:4e in network mk-addons-389176
	I0903 22:28:23.236835  113968 main.go:141] libmachine: Using API Version  1
	I0903 22:28:23.236851  113968 main.go:141] libmachine: () Calling .SetConfigRaw
	I0903 22:28:23.236907  113968 main.go:141] libmachine: (addons-389176) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:01:4e", ip: ""} in network mk-addons-389176: {Iface:virbr1 ExpiryTime:2025-09-03 23:27:49 +0000 UTC Type:0 Mac:52:54:00:ad:01:4e Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:addons-389176 Clientid:01:52:54:00:ad:01:4e}
	I0903 22:28:23.236922  113968 main.go:141] libmachine: (addons-389176) DBG | domain addons-389176 has defined IP address 192.168.39.230 and MAC address 52:54:00:ad:01:4e in network mk-addons-389176
	I0903 22:28:23.237236  113968 main.go:141] libmachine: (addons-389176) Calling .GetSSHPort
	I0903 22:28:23.237315  113968 main.go:141] libmachine: () Calling .GetMachineName
	I0903 22:28:23.237373  113968 main.go:141] libmachine: (addons-389176) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:01:4e", ip: ""} in network mk-addons-389176: {Iface:virbr1 ExpiryTime:2025-09-03 23:27:49 +0000 UTC Type:0 Mac:52:54:00:ad:01:4e Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:addons-389176 Clientid:01:52:54:00:ad:01:4e}
	I0903 22:28:23.237403  113968 main.go:141] libmachine: (addons-389176) DBG | domain addons-389176 has defined IP address 192.168.39.230 and MAC address 52:54:00:ad:01:4e in network mk-addons-389176
	I0903 22:28:23.237874  113968 main.go:141] libmachine: (addons-389176) DBG | domain addons-389176 has defined MAC address 52:54:00:ad:01:4e in network mk-addons-389176
	I0903 22:28:23.237930  113968 main.go:141] libmachine: (addons-389176) Calling .GetSSHPort
	I0903 22:28:23.237982  113968 main.go:141] libmachine: (addons-389176) Calling .GetSSHKeyPath
	I0903 22:28:23.238033  113968 main.go:141] libmachine: (addons-389176) Calling .GetState
	I0903 22:28:23.238221  113968 main.go:141] libmachine: (addons-389176) Calling .GetSSHKeyPath
	I0903 22:28:23.238489  113968 main.go:141] libmachine: (addons-389176) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:01:4e", ip: ""} in network mk-addons-389176: {Iface:virbr1 ExpiryTime:2025-09-03 23:27:49 +0000 UTC Type:0 Mac:52:54:00:ad:01:4e Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:addons-389176 Clientid:01:52:54:00:ad:01:4e}
	I0903 22:28:23.238509  113968 main.go:141] libmachine: (addons-389176) DBG | domain addons-389176 has defined IP address 192.168.39.230 and MAC address 52:54:00:ad:01:4e in network mk-addons-389176
	I0903 22:28:23.238544  113968 main.go:141] libmachine: (addons-389176) Calling .GetSSHUsername
	I0903 22:28:23.238733  113968 main.go:141] libmachine: (addons-389176) Calling .GetSSHPort
	I0903 22:28:23.238770  113968 main.go:141] libmachine: (addons-389176) Calling .GetSSHUsername
	I0903 22:28:23.238786  113968 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21341-109162/.minikube/machines/addons-389176/id_rsa Username:docker}
	I0903 22:28:23.238917  113968 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21341-109162/.minikube/machines/addons-389176/id_rsa Username:docker}
	I0903 22:28:23.239212  113968 main.go:141] libmachine: (addons-389176) Calling .GetSSHKeyPath
	I0903 22:28:23.239336  113968 main.go:141] libmachine: (addons-389176) Calling .GetSSHUsername
	I0903 22:28:23.239457  113968 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21341-109162/.minikube/machines/addons-389176/id_rsa Username:docker}
	I0903 22:28:23.239517  113968 main.go:141] libmachine: () Calling .GetMachineName
	I0903 22:28:23.239608  113968 main.go:141] libmachine: (addons-389176) Calling .DriverName
	I0903 22:28:23.239846  113968 main.go:141] libmachine: (addons-389176) Calling .GetState
	I0903 22:28:23.241095  113968 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36457
	I0903 22:28:23.241201  113968 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.3
	I0903 22:28:23.241665  113968 main.go:141] libmachine: () Calling .GetVersion
	I0903 22:28:23.242122  113968 main.go:141] libmachine: Using API Version  1
	I0903 22:28:23.242141  113968 main.go:141] libmachine: () Calling .SetConfigRaw
	I0903 22:28:23.242291  113968 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0903 22:28:23.242306  113968 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0903 22:28:23.242324  113968 main.go:141] libmachine: (addons-389176) Calling .GetSSHHostname
	I0903 22:28:23.242933  113968 main.go:141] libmachine: () Calling .GetMachineName
	I0903 22:28:23.243396  113968 main.go:141] libmachine: (addons-389176) Calling .GetState
	I0903 22:28:23.243922  113968 main.go:141] libmachine: (addons-389176) Calling .DriverName
	I0903 22:28:23.244133  113968 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32789
	I0903 22:28:23.244330  113968 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46327
	I0903 22:28:23.244650  113968 main.go:141] libmachine: () Calling .GetVersion
	I0903 22:28:23.245083  113968 main.go:141] libmachine: Using API Version  1
	I0903 22:28:23.245104  113968 main.go:141] libmachine: () Calling .SetConfigRaw
	I0903 22:28:23.245175  113968 main.go:141] libmachine: () Calling .GetVersion
	I0903 22:28:23.245473  113968 main.go:141] libmachine: () Calling .GetMachineName
	I0903 22:28:23.245663  113968 main.go:141] libmachine: Using API Version  1
	I0903 22:28:23.245679  113968 main.go:141] libmachine: () Calling .SetConfigRaw
	I0903 22:28:23.245729  113968 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I0903 22:28:23.245773  113968 main.go:141] libmachine: (addons-389176) DBG | domain addons-389176 has defined MAC address 52:54:00:ad:01:4e in network mk-addons-389176
	I0903 22:28:23.245750  113968 main.go:141] libmachine: (addons-389176) Calling .GetState
	I0903 22:28:23.246166  113968 main.go:141] libmachine: () Calling .GetMachineName
	I0903 22:28:23.246219  113968 main.go:141] libmachine: (addons-389176) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:01:4e", ip: ""} in network mk-addons-389176: {Iface:virbr1 ExpiryTime:2025-09-03 23:27:49 +0000 UTC Type:0 Mac:52:54:00:ad:01:4e Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:addons-389176 Clientid:01:52:54:00:ad:01:4e}
	I0903 22:28:23.246237  113968 main.go:141] libmachine: (addons-389176) DBG | domain addons-389176 has defined IP address 192.168.39.230 and MAC address 52:54:00:ad:01:4e in network mk-addons-389176
	I0903 22:28:23.246263  113968 main.go:141] libmachine: (addons-389176) Calling .GetSSHPort
	I0903 22:28:23.246436  113968 main.go:141] libmachine: (addons-389176) Calling .GetSSHKeyPath
	I0903 22:28:23.246591  113968 main.go:141] libmachine: (addons-389176) Calling .GetSSHUsername
	I0903 22:28:23.246727  113968 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21341-109162/.minikube/machines/addons-389176/id_rsa Username:docker}
	I0903 22:28:23.246744  113968 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0903 22:28:23.246768  113968 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0903 22:28:23.246796  113968 main.go:141] libmachine: (addons-389176) Calling .GetSSHHostname
	I0903 22:28:23.247300  113968 main.go:141] libmachine: (addons-389176) Calling .DriverName
	I0903 22:28:23.247298  113968 main.go:141] libmachine: (addons-389176) Calling .DriverName
	I0903 22:28:23.248727  113968 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.44.0
	I0903 22:28:23.249068  113968 main.go:141] libmachine: (addons-389176) Calling .DriverName
	I0903 22:28:23.249078  113968 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36671
	I0903 22:28:23.249130  113968 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33321
	I0903 22:28:23.249678  113968 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I0903 22:28:23.249697  113968 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I0903 22:28:23.249711  113968 main.go:141] libmachine: (addons-389176) Calling .GetSSHHostname
	I0903 22:28:23.249980  113968 main.go:141] libmachine: () Calling .GetVersion
	I0903 22:28:23.249990  113968 main.go:141] libmachine: (addons-389176) DBG | domain addons-389176 has defined MAC address 52:54:00:ad:01:4e in network mk-addons-389176
	I0903 22:28:23.249990  113968 main.go:141] libmachine: () Calling .GetVersion
	I0903 22:28:23.250432  113968 main.go:141] libmachine: (addons-389176) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:01:4e", ip: ""} in network mk-addons-389176: {Iface:virbr1 ExpiryTime:2025-09-03 23:27:49 +0000 UTC Type:0 Mac:52:54:00:ad:01:4e Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:addons-389176 Clientid:01:52:54:00:ad:01:4e}
	I0903 22:28:23.250459  113968 main.go:141] libmachine: (addons-389176) DBG | domain addons-389176 has defined IP address 192.168.39.230 and MAC address 52:54:00:ad:01:4e in network mk-addons-389176
	I0903 22:28:23.250582  113968 main.go:141] libmachine: Using API Version  1
	I0903 22:28:23.250599  113968 main.go:141] libmachine: () Calling .SetConfigRaw
	I0903 22:28:23.250600  113968 main.go:141] libmachine: (addons-389176) Calling .GetSSHPort
	I0903 22:28:23.250725  113968 main.go:141] libmachine: Using API Version  1
	I0903 22:28:23.250758  113968 main.go:141] libmachine: () Calling .SetConfigRaw
	I0903 22:28:23.250832  113968 main.go:141] libmachine: (addons-389176) Calling .GetSSHKeyPath
	I0903 22:28:23.251193  113968 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I0903 22:28:23.251250  113968 main.go:141] libmachine: () Calling .GetMachineName
	I0903 22:28:23.251314  113968 main.go:141] libmachine: () Calling .GetMachineName
	I0903 22:28:23.251349  113968 main.go:141] libmachine: (addons-389176) Calling .GetSSHUsername
	I0903 22:28:23.251511  113968 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21341-109162/.minikube/machines/addons-389176/id_rsa Username:docker}
	I0903 22:28:23.251600  113968 main.go:141] libmachine: (addons-389176) Calling .GetState
	I0903 22:28:23.252209  113968 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I0903 22:28:23.252227  113968 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I0903 22:28:23.252238  113968 main.go:141] libmachine: (addons-389176) Calling .GetState
	I0903 22:28:23.252244  113968 main.go:141] libmachine: (addons-389176) Calling .GetSSHHostname
	I0903 22:28:23.252657  113968 main.go:141] libmachine: (addons-389176) DBG | domain addons-389176 has defined MAC address 52:54:00:ad:01:4e in network mk-addons-389176
	I0903 22:28:23.253400  113968 main.go:141] libmachine: (addons-389176) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:01:4e", ip: ""} in network mk-addons-389176: {Iface:virbr1 ExpiryTime:2025-09-03 23:27:49 +0000 UTC Type:0 Mac:52:54:00:ad:01:4e Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:addons-389176 Clientid:01:52:54:00:ad:01:4e}
	I0903 22:28:23.253421  113968 main.go:141] libmachine: (addons-389176) DBG | domain addons-389176 has defined IP address 192.168.39.230 and MAC address 52:54:00:ad:01:4e in network mk-addons-389176
	I0903 22:28:23.253593  113968 main.go:141] libmachine: (addons-389176) Calling .GetSSHPort
	I0903 22:28:23.253758  113968 main.go:141] libmachine: (addons-389176) Calling .GetSSHKeyPath
	I0903 22:28:23.253824  113968 main.go:141] libmachine: (addons-389176) Calling .DriverName
	I0903 22:28:23.253879  113968 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42993
	I0903 22:28:23.254140  113968 main.go:141] libmachine: (addons-389176) Calling .GetSSHUsername
	I0903 22:28:23.254289  113968 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21341-109162/.minikube/machines/addons-389176/id_rsa Username:docker}
	I0903 22:28:23.254318  113968 main.go:141] libmachine: () Calling .GetVersion
	I0903 22:28:23.254518  113968 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35929
	I0903 22:28:23.254630  113968 main.go:141] libmachine: (addons-389176) Calling .DriverName
	I0903 22:28:23.254905  113968 main.go:141] libmachine: () Calling .GetVersion
	I0903 22:28:23.255114  113968 main.go:141] libmachine: Using API Version  1
	I0903 22:28:23.255250  113968 main.go:141] libmachine: () Calling .SetConfigRaw
	I0903 22:28:23.255731  113968 main.go:141] libmachine: () Calling .GetMachineName
	I0903 22:28:23.255816  113968 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0903 22:28:23.255853  113968 main.go:141] libmachine: Using API Version  1
	I0903 22:28:23.255870  113968 main.go:141] libmachine: () Calling .SetConfigRaw
	I0903 22:28:23.256226  113968 main.go:141] libmachine: () Calling .GetMachineName
	I0903 22:28:23.256253  113968 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21341-109162/.minikube/bin/docker-machine-driver-kvm2
	I0903 22:28:23.256285  113968 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0903 22:28:23.256516  113968 main.go:141] libmachine: (addons-389176) Calling .GetState
	I0903 22:28:23.256519  113968 main.go:141] libmachine: (addons-389176) DBG | domain addons-389176 has defined MAC address 52:54:00:ad:01:4e in network mk-addons-389176
	I0903 22:28:23.256866  113968 main.go:141] libmachine: (addons-389176) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:01:4e", ip: ""} in network mk-addons-389176: {Iface:virbr1 ExpiryTime:2025-09-03 23:27:49 +0000 UTC Type:0 Mac:52:54:00:ad:01:4e Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:addons-389176 Clientid:01:52:54:00:ad:01:4e}
	I0903 22:28:23.256886  113968 main.go:141] libmachine: (addons-389176) DBG | domain addons-389176 has defined IP address 192.168.39.230 and MAC address 52:54:00:ad:01:4e in network mk-addons-389176
	I0903 22:28:23.257006  113968 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.40
	I0903 22:28:23.257102  113968 main.go:141] libmachine: (addons-389176) Calling .GetSSHPort
	I0903 22:28:23.257307  113968 main.go:141] libmachine: (addons-389176) Calling .GetSSHKeyPath
	I0903 22:28:23.257497  113968 main.go:141] libmachine: (addons-389176) Calling .GetSSHUsername
	I0903 22:28:23.257803  113968 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21341-109162/.minikube/machines/addons-389176/id_rsa Username:docker}
	I0903 22:28:23.258004  113968 out.go:179]   - Using image docker.io/busybox:stable
	I0903 22:28:23.258034  113968 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I0903 22:28:23.258122  113968 main.go:141] libmachine: (addons-389176) Calling .DriverName
	I0903 22:28:23.258138  113968 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0903 22:28:23.258158  113968 main.go:141] libmachine: (addons-389176) Calling .GetSSHHostname
	I0903 22:28:23.259402  113968 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40653
	I0903 22:28:23.261366  113968 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0903 22:28:23.261420  113968 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0903 22:28:23.261441  113968 main.go:141] libmachine: (addons-389176) Calling .GetSSHHostname
	I0903 22:28:23.261904  113968 main.go:141] libmachine: () Calling .GetVersion
	I0903 22:28:23.262033  113968 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0903 22:28:23.263190  113968 main.go:141] libmachine: Using API Version  1
	I0903 22:28:23.263241  113968 main.go:141] libmachine: () Calling .SetConfigRaw
	I0903 22:28:23.264674  113968 main.go:141] libmachine: () Calling .GetMachineName
	I0903 22:28:23.264863  113968 main.go:141] libmachine: (addons-389176) Calling .GetState
	I0903 22:28:23.265030  113968 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0903 22:28:23.266276  113968 main.go:141] libmachine: (addons-389176) DBG | domain addons-389176 has defined MAC address 52:54:00:ad:01:4e in network mk-addons-389176
	I0903 22:28:23.266443  113968 main.go:141] libmachine: (addons-389176) DBG | domain addons-389176 has defined MAC address 52:54:00:ad:01:4e in network mk-addons-389176
	I0903 22:28:23.266524  113968 main.go:141] libmachine: (addons-389176) Calling .DriverName
	I0903 22:28:23.266976  113968 main.go:141] libmachine: (addons-389176) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:01:4e", ip: ""} in network mk-addons-389176: {Iface:virbr1 ExpiryTime:2025-09-03 23:27:49 +0000 UTC Type:0 Mac:52:54:00:ad:01:4e Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:addons-389176 Clientid:01:52:54:00:ad:01:4e}
	I0903 22:28:23.267001  113968 main.go:141] libmachine: (addons-389176) DBG | domain addons-389176 has defined IP address 192.168.39.230 and MAC address 52:54:00:ad:01:4e in network mk-addons-389176
	I0903 22:28:23.267299  113968 main.go:141] libmachine: (addons-389176) Calling .GetSSHPort
	I0903 22:28:23.267205  113968 main.go:141] libmachine: (addons-389176) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:01:4e", ip: ""} in network mk-addons-389176: {Iface:virbr1 ExpiryTime:2025-09-03 23:27:49 +0000 UTC Type:0 Mac:52:54:00:ad:01:4e Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:addons-389176 Clientid:01:52:54:00:ad:01:4e}
	I0903 22:28:23.267350  113968 main.go:141] libmachine: (addons-389176) DBG | domain addons-389176 has defined IP address 192.168.39.230 and MAC address 52:54:00:ad:01:4e in network mk-addons-389176
	I0903 22:28:23.267229  113968 main.go:141] libmachine: (addons-389176) Calling .GetSSHPort
	I0903 22:28:23.267608  113968 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0903 22:28:23.267621  113968 main.go:141] libmachine: (addons-389176) Calling .GetSSHKeyPath
	I0903 22:28:23.267611  113968 main.go:141] libmachine: (addons-389176) Calling .GetSSHKeyPath
	I0903 22:28:23.267828  113968 main.go:141] libmachine: (addons-389176) Calling .GetSSHUsername
	I0903 22:28:23.267847  113968 main.go:141] libmachine: (addons-389176) Calling .GetSSHUsername
	I0903 22:28:23.268177  113968 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21341-109162/.minikube/machines/addons-389176/id_rsa Username:docker}
	I0903 22:28:23.268264  113968 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21341-109162/.minikube/machines/addons-389176/id_rsa Username:docker}
	I0903 22:28:23.268326  113968 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I0903 22:28:23.269975  113968 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0903 22:28:23.269995  113968 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I0903 22:28:23.270013  113968 main.go:141] libmachine: (addons-389176) Calling .GetSSHHostname
	I0903 22:28:23.270069  113968 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0903 22:28:23.271090  113968 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0903 22:28:23.272059  113968 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0903 22:28:23.272641  113968 main.go:141] libmachine: (addons-389176) DBG | domain addons-389176 has defined MAC address 52:54:00:ad:01:4e in network mk-addons-389176
	I0903 22:28:23.273011  113968 main.go:141] libmachine: (addons-389176) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:01:4e", ip: ""} in network mk-addons-389176: {Iface:virbr1 ExpiryTime:2025-09-03 23:27:49 +0000 UTC Type:0 Mac:52:54:00:ad:01:4e Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:addons-389176 Clientid:01:52:54:00:ad:01:4e}
	I0903 22:28:23.273038  113968 main.go:141] libmachine: (addons-389176) DBG | domain addons-389176 has defined IP address 192.168.39.230 and MAC address 52:54:00:ad:01:4e in network mk-addons-389176
	I0903 22:28:23.273153  113968 main.go:141] libmachine: (addons-389176) Calling .GetSSHPort
	I0903 22:28:23.273335  113968 main.go:141] libmachine: (addons-389176) Calling .GetSSHKeyPath
	I0903 22:28:23.273497  113968 main.go:141] libmachine: (addons-389176) Calling .GetSSHUsername
	I0903 22:28:23.273650  113968 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21341-109162/.minikube/machines/addons-389176/id_rsa Username:docker}
	I0903 22:28:23.274117  113968 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0903 22:28:23.275102  113968 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0903 22:28:23.276043  113968 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0903 22:28:23.276061  113968 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0903 22:28:23.276080  113968 main.go:141] libmachine: (addons-389176) Calling .GetSSHHostname
	I0903 22:28:23.279256  113968 main.go:141] libmachine: (addons-389176) DBG | domain addons-389176 has defined MAC address 52:54:00:ad:01:4e in network mk-addons-389176
	I0903 22:28:23.279652  113968 main.go:141] libmachine: (addons-389176) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:01:4e", ip: ""} in network mk-addons-389176: {Iface:virbr1 ExpiryTime:2025-09-03 23:27:49 +0000 UTC Type:0 Mac:52:54:00:ad:01:4e Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:addons-389176 Clientid:01:52:54:00:ad:01:4e}
	I0903 22:28:23.279672  113968 main.go:141] libmachine: (addons-389176) DBG | domain addons-389176 has defined IP address 192.168.39.230 and MAC address 52:54:00:ad:01:4e in network mk-addons-389176
	I0903 22:28:23.279855  113968 main.go:141] libmachine: (addons-389176) Calling .GetSSHPort
	I0903 22:28:23.280021  113968 main.go:141] libmachine: (addons-389176) Calling .GetSSHKeyPath
	I0903 22:28:23.280205  113968 main.go:141] libmachine: (addons-389176) Calling .GetSSHUsername
	I0903 22:28:23.280346  113968 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21341-109162/.minikube/machines/addons-389176/id_rsa Username:docker}
	I0903 22:28:23.282093  113968 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37969
	I0903 22:28:23.282516  113968 main.go:141] libmachine: () Calling .GetVersion
	I0903 22:28:23.282984  113968 main.go:141] libmachine: Using API Version  1
	I0903 22:28:23.283007  113968 main.go:141] libmachine: () Calling .SetConfigRaw
	I0903 22:28:23.283337  113968 main.go:141] libmachine: () Calling .GetMachineName
	I0903 22:28:23.283541  113968 main.go:141] libmachine: (addons-389176) Calling .GetState
	I0903 22:28:23.285018  113968 main.go:141] libmachine: (addons-389176) Calling .DriverName
	I0903 22:28:23.285261  113968 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0903 22:28:23.285277  113968 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0903 22:28:23.285295  113968 main.go:141] libmachine: (addons-389176) Calling .GetSSHHostname
	I0903 22:28:23.287874  113968 main.go:141] libmachine: (addons-389176) DBG | domain addons-389176 has defined MAC address 52:54:00:ad:01:4e in network mk-addons-389176
	I0903 22:28:23.288221  113968 main.go:141] libmachine: (addons-389176) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:01:4e", ip: ""} in network mk-addons-389176: {Iface:virbr1 ExpiryTime:2025-09-03 23:27:49 +0000 UTC Type:0 Mac:52:54:00:ad:01:4e Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:addons-389176 Clientid:01:52:54:00:ad:01:4e}
	I0903 22:28:23.288240  113968 main.go:141] libmachine: (addons-389176) DBG | domain addons-389176 has defined IP address 192.168.39.230 and MAC address 52:54:00:ad:01:4e in network mk-addons-389176
	I0903 22:28:23.288395  113968 main.go:141] libmachine: (addons-389176) Calling .GetSSHPort
	I0903 22:28:23.288554  113968 main.go:141] libmachine: (addons-389176) Calling .GetSSHKeyPath
	I0903 22:28:23.288697  113968 main.go:141] libmachine: (addons-389176) Calling .GetSSHUsername
	I0903 22:28:23.288829  113968 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21341-109162/.minikube/machines/addons-389176/id_rsa Username:docker}
	I0903 22:28:23.451603  113968 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	W0903 22:28:23.503690  113968 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:45490->192.168.39.230:22: read: connection reset by peer
	I0903 22:28:23.503732  113968 retry.go:31] will retry after 132.01746ms: ssh: handshake failed: read tcp 192.168.39.1:45490->192.168.39.230:22: read: connection reset by peer
	I0903 22:28:23.536923  113968 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0903 22:28:23.737154  113968 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0903 22:28:23.965595  113968 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I0903 22:28:23.965631  113968 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0903 22:28:23.969430  113968 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0903 22:28:23.971081  113968 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0903 22:28:23.985205  113968 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I0903 22:28:23.985227  113968 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I0903 22:28:23.989074  113968 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0903 22:28:23.989093  113968 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0903 22:28:24.034469  113968 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0903 22:28:24.034493  113968 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0903 22:28:24.036240  113968 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I0903 22:28:24.085618  113968 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0903 22:28:24.085647  113968 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0903 22:28:24.087129  113968 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0903 22:28:24.090672  113968 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0903 22:28:24.127077  113968 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0903 22:28:24.129854  113968 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0903 22:28:24.237917  113968 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0903 22:28:24.473807  113968 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0903 22:28:24.473840  113968 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0903 22:28:24.478469  113968 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0903 22:28:24.518912  113968 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0903 22:28:24.518942  113968 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0903 22:28:24.530726  113968 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0903 22:28:24.530753  113968 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0903 22:28:24.583686  113968 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0903 22:28:24.583720  113968 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0903 22:28:24.704474  113968 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0903 22:28:24.704507  113968 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0903 22:28:25.066234  113968 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0903 22:28:25.066268  113968 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0903 22:28:25.091717  113968 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0903 22:28:25.091750  113968 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0903 22:28:25.128602  113968 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0903 22:28:25.128630  113968 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0903 22:28:25.143064  113968 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0903 22:28:25.143089  113968 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0903 22:28:25.194318  113968 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0903 22:28:25.378003  113968 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0903 22:28:25.378028  113968 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0903 22:28:25.428202  113968 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0903 22:28:25.482276  113968 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0903 22:28:25.482319  113968 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0903 22:28:25.543756  113968 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0903 22:28:25.543794  113968 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0903 22:28:25.715200  113968 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0903 22:28:25.780480  113968 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0903 22:28:25.780506  113968 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0903 22:28:25.814623  113968 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0903 22:28:25.814650  113968 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0903 22:28:26.255363  113968 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0903 22:28:26.392324  113968 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.940668983s)
	I0903 22:28:26.392364  113968 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.855402709s)
	I0903 22:28:26.392373  113968 start.go:976] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0903 22:28:26.392435  113968 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (2.655255188s)
	I0903 22:28:26.392478  113968 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (2.423024744s)
	I0903 22:28:26.392477  113968 main.go:141] libmachine: Making call to close driver server
	I0903 22:28:26.392514  113968 main.go:141] libmachine: Making call to close driver server
	I0903 22:28:26.392536  113968 main.go:141] libmachine: (addons-389176) Calling .Close
	I0903 22:28:26.392597  113968 main.go:141] libmachine: (addons-389176) Calling .Close
	I0903 22:28:26.392836  113968 main.go:141] libmachine: Successfully made call to close driver server
	I0903 22:28:26.392951  113968 main.go:141] libmachine: Making call to close connection to plugin binary
	I0903 22:28:26.392964  113968 main.go:141] libmachine: Making call to close driver server
	I0903 22:28:26.392972  113968 main.go:141] libmachine: (addons-389176) Calling .Close
	I0903 22:28:26.392904  113968 main.go:141] libmachine: (addons-389176) DBG | Closing plugin on server side
	I0903 22:28:26.393064  113968 main.go:141] libmachine: Successfully made call to close driver server
	I0903 22:28:26.393085  113968 main.go:141] libmachine: Making call to close connection to plugin binary
	I0903 22:28:26.393098  113968 main.go:141] libmachine: Making call to close driver server
	I0903 22:28:26.393111  113968 main.go:141] libmachine: (addons-389176) Calling .Close
	I0903 22:28:26.393197  113968 main.go:141] libmachine: Successfully made call to close driver server
	I0903 22:28:26.393209  113968 main.go:141] libmachine: Making call to close connection to plugin binary
	I0903 22:28:26.393401  113968 node_ready.go:35] waiting up to 6m0s for node "addons-389176" to be "Ready" ...
	I0903 22:28:26.393497  113968 main.go:141] libmachine: Successfully made call to close driver server
	I0903 22:28:26.393512  113968 main.go:141] libmachine: Making call to close connection to plugin binary
	I0903 22:28:26.401754  113968 node_ready.go:49] node "addons-389176" is "Ready"
	I0903 22:28:26.401777  113968 node_ready.go:38] duration metric: took 8.347816ms for node "addons-389176" to be "Ready" ...
	I0903 22:28:26.401789  113968 api_server.go:52] waiting for apiserver process to appear ...
	I0903 22:28:26.401825  113968 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 22:28:26.437818  113968 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0903 22:28:26.437842  113968 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0903 22:28:26.905820  113968 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-389176" context rescaled to 1 replicas
	I0903 22:28:26.912576  113968 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0903 22:28:26.912608  113968 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0903 22:28:27.194311  113968 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0903 22:28:27.194348  113968 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0903 22:28:27.402239  113968 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0903 22:28:27.402275  113968 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0903 22:28:27.517682  113968 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0903 22:28:27.517717  113968 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0903 22:28:27.911726  113968 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0903 22:28:27.911753  113968 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0903 22:28:28.132520  113968 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0903 22:28:28.880080  113968 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.908960178s)
	I0903 22:28:28.880149  113968 main.go:141] libmachine: Making call to close driver server
	I0903 22:28:28.880148  113968 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (4.843874603s)
	I0903 22:28:28.880161  113968 main.go:141] libmachine: (addons-389176) Calling .Close
	I0903 22:28:28.880185  113968 main.go:141] libmachine: Making call to close driver server
	I0903 22:28:28.880195  113968 main.go:141] libmachine: (addons-389176) Calling .Close
	I0903 22:28:28.880461  113968 main.go:141] libmachine: (addons-389176) DBG | Closing plugin on server side
	I0903 22:28:28.880467  113968 main.go:141] libmachine: Successfully made call to close driver server
	I0903 22:28:28.880483  113968 main.go:141] libmachine: Making call to close connection to plugin binary
	I0903 22:28:28.880498  113968 main.go:141] libmachine: Making call to close driver server
	I0903 22:28:28.880506  113968 main.go:141] libmachine: (addons-389176) Calling .Close
	I0903 22:28:28.880513  113968 main.go:141] libmachine: Successfully made call to close driver server
	I0903 22:28:28.880534  113968 main.go:141] libmachine: Making call to close connection to plugin binary
	I0903 22:28:28.880544  113968 main.go:141] libmachine: Making call to close driver server
	I0903 22:28:28.880551  113968 main.go:141] libmachine: (addons-389176) Calling .Close
	I0903 22:28:28.880748  113968 main.go:141] libmachine: (addons-389176) DBG | Closing plugin on server side
	I0903 22:28:28.880776  113968 main.go:141] libmachine: Successfully made call to close driver server
	I0903 22:28:28.880782  113968 main.go:141] libmachine: Making call to close connection to plugin binary
	I0903 22:28:28.880884  113968 main.go:141] libmachine: Successfully made call to close driver server
	I0903 22:28:28.880911  113968 main.go:141] libmachine: Making call to close connection to plugin binary
	I0903 22:28:30.708314  113968 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0903 22:28:30.708366  113968 main.go:141] libmachine: (addons-389176) Calling .GetSSHHostname
	I0903 22:28:30.711604  113968 main.go:141] libmachine: (addons-389176) DBG | domain addons-389176 has defined MAC address 52:54:00:ad:01:4e in network mk-addons-389176
	I0903 22:28:30.712192  113968 main.go:141] libmachine: (addons-389176) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:01:4e", ip: ""} in network mk-addons-389176: {Iface:virbr1 ExpiryTime:2025-09-03 23:27:49 +0000 UTC Type:0 Mac:52:54:00:ad:01:4e Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:addons-389176 Clientid:01:52:54:00:ad:01:4e}
	I0903 22:28:30.712224  113968 main.go:141] libmachine: (addons-389176) DBG | domain addons-389176 has defined IP address 192.168.39.230 and MAC address 52:54:00:ad:01:4e in network mk-addons-389176
	I0903 22:28:30.712414  113968 main.go:141] libmachine: (addons-389176) Calling .GetSSHPort
	I0903 22:28:30.712646  113968 main.go:141] libmachine: (addons-389176) Calling .GetSSHKeyPath
	I0903 22:28:30.712834  113968 main.go:141] libmachine: (addons-389176) Calling .GetSSHUsername
	I0903 22:28:30.713033  113968 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21341-109162/.minikube/machines/addons-389176/id_rsa Username:docker}
	I0903 22:28:30.956654  113968 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0903 22:28:31.016694  113968 addons.go:238] Setting addon gcp-auth=true in "addons-389176"
	I0903 22:28:31.016766  113968 host.go:66] Checking if "addons-389176" exists ...
	I0903 22:28:31.017213  113968 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21341-109162/.minikube/bin/docker-machine-driver-kvm2
	I0903 22:28:31.017259  113968 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0903 22:28:31.032747  113968 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35769
	I0903 22:28:31.033192  113968 main.go:141] libmachine: () Calling .GetVersion
	I0903 22:28:31.033677  113968 main.go:141] libmachine: Using API Version  1
	I0903 22:28:31.033699  113968 main.go:141] libmachine: () Calling .SetConfigRaw
	I0903 22:28:31.034039  113968 main.go:141] libmachine: () Calling .GetMachineName
	I0903 22:28:31.034672  113968 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21341-109162/.minikube/bin/docker-machine-driver-kvm2
	I0903 22:28:31.034708  113968 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0903 22:28:31.050740  113968 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40987
	I0903 22:28:31.051295  113968 main.go:141] libmachine: () Calling .GetVersion
	I0903 22:28:31.051808  113968 main.go:141] libmachine: Using API Version  1
	I0903 22:28:31.051832  113968 main.go:141] libmachine: () Calling .SetConfigRaw
	I0903 22:28:31.052242  113968 main.go:141] libmachine: () Calling .GetMachineName
	I0903 22:28:31.052446  113968 main.go:141] libmachine: (addons-389176) Calling .GetState
	I0903 22:28:31.053890  113968 main.go:141] libmachine: (addons-389176) Calling .DriverName
	I0903 22:28:31.054123  113968 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0903 22:28:31.054150  113968 main.go:141] libmachine: (addons-389176) Calling .GetSSHHostname
	I0903 22:28:31.057027  113968 main.go:141] libmachine: (addons-389176) DBG | domain addons-389176 has defined MAC address 52:54:00:ad:01:4e in network mk-addons-389176
	I0903 22:28:31.057461  113968 main.go:141] libmachine: (addons-389176) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:01:4e", ip: ""} in network mk-addons-389176: {Iface:virbr1 ExpiryTime:2025-09-03 23:27:49 +0000 UTC Type:0 Mac:52:54:00:ad:01:4e Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:addons-389176 Clientid:01:52:54:00:ad:01:4e}
	I0903 22:28:31.057495  113968 main.go:141] libmachine: (addons-389176) DBG | domain addons-389176 has defined IP address 192.168.39.230 and MAC address 52:54:00:ad:01:4e in network mk-addons-389176
	I0903 22:28:31.057668  113968 main.go:141] libmachine: (addons-389176) Calling .GetSSHPort
	I0903 22:28:31.057893  113968 main.go:141] libmachine: (addons-389176) Calling .GetSSHKeyPath
	I0903 22:28:31.058079  113968 main.go:141] libmachine: (addons-389176) Calling .GetSSHUsername
	I0903 22:28:31.058232  113968 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21341-109162/.minikube/machines/addons-389176/id_rsa Username:docker}
	I0903 22:28:31.581061  113968 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.493884912s)
	I0903 22:28:31.581123  113968 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (7.490421514s)
	I0903 22:28:31.581129  113968 main.go:141] libmachine: Making call to close driver server
	I0903 22:28:31.581176  113968 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (7.454064429s)
	I0903 22:28:31.581190  113968 main.go:141] libmachine: (addons-389176) Calling .Close
	I0903 22:28:31.581200  113968 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (7.451318374s)
	I0903 22:28:31.581209  113968 main.go:141] libmachine: Making call to close driver server
	I0903 22:28:31.581221  113968 main.go:141] libmachine: Making call to close driver server
	I0903 22:28:31.581238  113968 main.go:141] libmachine: (addons-389176) Calling .Close
	I0903 22:28:31.581243  113968 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.343300751s)
	I0903 22:28:31.581164  113968 main.go:141] libmachine: Making call to close driver server
	I0903 22:28:31.581267  113968 main.go:141] libmachine: Making call to close driver server
	I0903 22:28:31.581274  113968 main.go:141] libmachine: (addons-389176) Calling .Close
	I0903 22:28:31.581302  113968 main.go:141] libmachine: (addons-389176) Calling .Close
	I0903 22:28:31.581367  113968 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (7.102865899s)
	W0903 22:28:31.581415  113968 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0903 22:28:31.581431  113968 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (6.387030618s)
	I0903 22:28:31.581439  113968 retry.go:31] will retry after 203.420744ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0903 22:28:31.581538  113968 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.153297963s)
	I0903 22:28:31.581558  113968 main.go:141] libmachine: Making call to close driver server
	I0903 22:28:31.581572  113968 main.go:141] libmachine: (addons-389176) Calling .Close
	I0903 22:28:31.581586  113968 main.go:141] libmachine: Making call to close driver server
	I0903 22:28:31.581607  113968 main.go:141] libmachine: (addons-389176) Calling .Close
	I0903 22:28:31.581683  113968 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (5.86645133s)
	I0903 22:28:31.581223  113968 main.go:141] libmachine: (addons-389176) Calling .Close
	I0903 22:28:31.581708  113968 main.go:141] libmachine: Making call to close driver server
	I0903 22:28:31.581718  113968 main.go:141] libmachine: (addons-389176) Calling .Close
	I0903 22:28:31.581859  113968 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (5.326467409s)
	W0903 22:28:31.581887  113968 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	Warning: unrecognized format "int64"
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0903 22:28:31.581902  113968 retry.go:31] will retry after 367.792885ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	Warning: unrecognized format "int64"
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0903 22:28:31.581932  113968 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (5.180094235s)
	I0903 22:28:31.581946  113968 api_server.go:72] duration metric: took 8.492374123s to wait for apiserver process to appear ...
	I0903 22:28:31.581953  113968 api_server.go:88] waiting for apiserver healthz status ...
	I0903 22:28:31.581969  113968 api_server.go:253] Checking apiserver healthz at https://192.168.39.230:8443/healthz ...
	I0903 22:28:31.583825  113968 main.go:141] libmachine: (addons-389176) DBG | Closing plugin on server side
	I0903 22:28:31.583860  113968 main.go:141] libmachine: Successfully made call to close driver server
	I0903 22:28:31.583868  113968 main.go:141] libmachine: Making call to close connection to plugin binary
	I0903 22:28:31.583876  113968 main.go:141] libmachine: Making call to close driver server
	I0903 22:28:31.583882  113968 main.go:141] libmachine: (addons-389176) Calling .Close
	I0903 22:28:31.583956  113968 main.go:141] libmachine: (addons-389176) DBG | Closing plugin on server side
	I0903 22:28:31.583976  113968 main.go:141] libmachine: Successfully made call to close driver server
	I0903 22:28:31.583983  113968 main.go:141] libmachine: Making call to close connection to plugin binary
	I0903 22:28:31.583990  113968 main.go:141] libmachine: Making call to close driver server
	I0903 22:28:31.583996  113968 main.go:141] libmachine: (addons-389176) Calling .Close
	I0903 22:28:31.584165  113968 main.go:141] libmachine: (addons-389176) DBG | Closing plugin on server side
	I0903 22:28:31.584196  113968 main.go:141] libmachine: Successfully made call to close driver server
	I0903 22:28:31.584204  113968 main.go:141] libmachine: Making call to close connection to plugin binary
	I0903 22:28:31.584214  113968 main.go:141] libmachine: Making call to close driver server
	I0903 22:28:31.584221  113968 main.go:141] libmachine: (addons-389176) Calling .Close
	I0903 22:28:31.584270  113968 main.go:141] libmachine: (addons-389176) DBG | Closing plugin on server side
	I0903 22:28:31.584296  113968 main.go:141] libmachine: Successfully made call to close driver server
	I0903 22:28:31.584302  113968 main.go:141] libmachine: Making call to close connection to plugin binary
	I0903 22:28:31.584309  113968 main.go:141] libmachine: Making call to close driver server
	I0903 22:28:31.584315  113968 main.go:141] libmachine: (addons-389176) Calling .Close
	I0903 22:28:31.584355  113968 main.go:141] libmachine: (addons-389176) DBG | Closing plugin on server side
	I0903 22:28:31.584375  113968 main.go:141] libmachine: Successfully made call to close driver server
	I0903 22:28:31.584418  113968 main.go:141] libmachine: (addons-389176) DBG | Closing plugin on server side
	I0903 22:28:31.584418  113968 main.go:141] libmachine: (addons-389176) DBG | Closing plugin on server side
	I0903 22:28:31.584432  113968 main.go:141] libmachine: Making call to close connection to plugin binary
	I0903 22:28:31.584435  113968 main.go:141] libmachine: Successfully made call to close driver server
	I0903 22:28:31.584441  113968 main.go:141] libmachine: Making call to close driver server
	I0903 22:28:31.584449  113968 main.go:141] libmachine: (addons-389176) Calling .Close
	I0903 22:28:31.584456  113968 main.go:141] libmachine: (addons-389176) DBG | Closing plugin on server side
	I0903 22:28:31.584447  113968 main.go:141] libmachine: Successfully made call to close driver server
	I0903 22:28:31.584556  113968 main.go:141] libmachine: Making call to close connection to plugin binary
	I0903 22:28:31.584562  113968 main.go:141] libmachine: (addons-389176) DBG | Closing plugin on server side
	I0903 22:28:31.584570  113968 main.go:141] libmachine: Making call to close driver server
	I0903 22:28:31.584578  113968 main.go:141] libmachine: (addons-389176) Calling .Close
	I0903 22:28:31.584589  113968 main.go:141] libmachine: Successfully made call to close driver server
	I0903 22:28:31.584596  113968 main.go:141] libmachine: Making call to close connection to plugin binary
	I0903 22:28:31.584606  113968 addons.go:479] Verifying addon registry=true in "addons-389176"
	I0903 22:28:31.584629  113968 main.go:141] libmachine: Successfully made call to close driver server
	I0903 22:28:31.584641  113968 main.go:141] libmachine: Making call to close connection to plugin binary
	I0903 22:28:31.584871  113968 main.go:141] libmachine: (addons-389176) DBG | Closing plugin on server side
	I0903 22:28:31.584896  113968 main.go:141] libmachine: Successfully made call to close driver server
	I0903 22:28:31.584908  113968 main.go:141] libmachine: Making call to close connection to plugin binary
	I0903 22:28:31.585048  113968 main.go:141] libmachine: (addons-389176) DBG | Closing plugin on server side
	I0903 22:28:31.585084  113968 main.go:141] libmachine: Successfully made call to close driver server
	I0903 22:28:31.585092  113968 main.go:141] libmachine: Making call to close connection to plugin binary
	I0903 22:28:31.585100  113968 addons.go:479] Verifying addon metrics-server=true in "addons-389176"
	I0903 22:28:31.585225  113968 main.go:141] libmachine: (addons-389176) DBG | Closing plugin on server side
	I0903 22:28:31.585256  113968 main.go:141] libmachine: Successfully made call to close driver server
	I0903 22:28:31.585267  113968 main.go:141] libmachine: Successfully made call to close driver server
	I0903 22:28:31.585272  113968 main.go:141] libmachine: Making call to close connection to plugin binary
	I0903 22:28:31.585284  113968 main.go:141] libmachine: Making call to close connection to plugin binary
	I0903 22:28:31.585294  113968 main.go:141] libmachine: Making call to close driver server
	I0903 22:28:31.585302  113968 main.go:141] libmachine: (addons-389176) Calling .Close
	I0903 22:28:31.585589  113968 main.go:141] libmachine: (addons-389176) DBG | Closing plugin on server side
	I0903 22:28:31.585622  113968 main.go:141] libmachine: (addons-389176) DBG | Closing plugin on server side
	I0903 22:28:31.585718  113968 main.go:141] libmachine: (addons-389176) DBG | Closing plugin on server side
	I0903 22:28:31.584442  113968 main.go:141] libmachine: Making call to close connection to plugin binary
	I0903 22:28:31.586412  113968 main.go:141] libmachine: Making call to close driver server
	I0903 22:28:31.586432  113968 main.go:141] libmachine: (addons-389176) Calling .Close
	I0903 22:28:31.585745  113968 main.go:141] libmachine: Successfully made call to close driver server
	I0903 22:28:31.586516  113968 main.go:141] libmachine: Making call to close connection to plugin binary
	I0903 22:28:31.586897  113968 main.go:141] libmachine: Successfully made call to close driver server
	I0903 22:28:31.586920  113968 main.go:141] libmachine: Making call to close connection to plugin binary
	I0903 22:28:31.586995  113968 main.go:141] libmachine: (addons-389176) DBG | Closing plugin on server side
	I0903 22:28:31.587014  113968 main.go:141] libmachine: Successfully made call to close driver server
	I0903 22:28:31.587050  113968 main.go:141] libmachine: Making call to close connection to plugin binary
	I0903 22:28:31.587059  113968 addons.go:479] Verifying addon ingress=true in "addons-389176"
	I0903 22:28:31.588568  113968 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-389176 service yakd-dashboard -n yakd-dashboard
	
	I0903 22:28:31.588605  113968 out.go:179] * Verifying registry addon...
	I0903 22:28:31.589553  113968 out.go:179] * Verifying ingress addon...
	I0903 22:28:31.591173  113968 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0903 22:28:31.591865  113968 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
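
The two kapi.go:75 lines above start label-selector polling loops: minikube repeatedly lists the pods matching each selector and logs their phase until they report Running. A minimal client-go sketch of that polling pattern (illustrative names only, not minikube's actual implementation):

	package example

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
	)

	// waitForLabel polls until at least one pod matching selector is Running.
	func waitForLabel(ctx context.Context, cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
		return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
			func(ctx context.Context) (bool, error) {
				pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
				if err != nil {
					return false, nil // tolerate transient API errors and keep polling
				}
				for _, p := range pods.Items {
					if p.Status.Phase == corev1.PodRunning {
						return true, nil
					}
					fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
				}
				return false, nil
			})
	}
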
	I0903 22:28:31.665374  113968 api_server.go:279] https://192.168.39.230:8443/healthz returned 200:
	ok
	I0903 22:28:31.691179  113968 api_server.go:141] control plane version: v1.34.0
	I0903 22:28:31.691227  113968 api_server.go:131] duration metric: took 109.265617ms to wait for apiserver health ...
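
The health check above is a plain GET against the apiserver's /healthz endpoint, which must return HTTP 200 with the body "ok" (default RBAC typically permits this path without credentials). A self-contained sketch of the same probe; InsecureSkipVerify is for brevity only, since the test VM's apiserver certificate is signed by minikube's own CA:

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
	)

	func main() {
		// Demo only: skip certificate verification.
		client := &http.Client{Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		}}
		resp, err := client.Get("https://192.168.39.230:8443/healthz")
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("%d %s\n", resp.StatusCode, body) // expect: 200 ok
	}
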
	I0903 22:28:31.691253  113968 system_pods.go:43] waiting for kube-system pods to appear ...
	I0903 22:28:31.701885  113968 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0903 22:28:31.701916  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:28:31.706786  113968 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0903 22:28:31.706810  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0903 22:28:31.778545  113968 main.go:141] libmachine: Making call to close driver server
	I0903 22:28:31.778568  113968 main.go:141] libmachine: (addons-389176) Calling .Close
	I0903 22:28:31.778585  113968 main.go:141] libmachine: Making call to close driver server
	I0903 22:28:31.778604  113968 main.go:141] libmachine: (addons-389176) Calling .Close
	I0903 22:28:31.778817  113968 main.go:141] libmachine: Successfully made call to close driver server
	I0903 22:28:31.778833  113968 main.go:141] libmachine: Successfully made call to close driver server
	I0903 22:28:31.778837  113968 main.go:141] libmachine: Making call to close connection to plugin binary
	I0903 22:28:31.778846  113968 main.go:141] libmachine: Making call to close connection to plugin binary
	W0903 22:28:31.778957  113968 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
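
The warning above is a standard Kubernetes optimistic-concurrency conflict: something else updated the local-path StorageClass between minikube's read and its write, so the stale resourceVersion was rejected. The usual remedy is re-read-and-retry, which client-go packages as retry.RetryOnConflict; a sketch of marking a storage class non-default that way (illustrative, not minikube's code):

	package example

	import (
		"context"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/util/retry"
	)

	// markNonDefault re-reads the StorageClass on each attempt, so a
	// Conflict from a concurrent writer just triggers another round.
	func markNonDefault(ctx context.Context, cs kubernetes.Interface, name string) error {
		return retry.RetryOnConflict(retry.DefaultRetry, func() error {
			sc, err := cs.StorageV1().StorageClasses().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return err
			}
			if sc.Annotations == nil {
				sc.Annotations = map[string]string{}
			}
			sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "false"
			_, err = cs.StorageV1().StorageClasses().Update(ctx, sc, metav1.UpdateOptions{})
			return err // a Conflict here causes RetryOnConflict to try again
		})
	}
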
	I0903 22:28:31.785970  113968 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0903 22:28:31.817486  113968 system_pods.go:59] 17 kube-system pods found
	I0903 22:28:31.817537  113968 system_pods.go:61] "amd-gpu-device-plugin-p5trw" [e7710252-13ae-493d-94b9-a0fc2013d283] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0903 22:28:31.817546  113968 system_pods.go:61] "coredns-66bc5c9577-fhjd7" [1d05fa59-7212-458e-b74b-c9f2803d2a69] Running
	I0903 22:28:31.817554  113968 system_pods.go:61] "coredns-66bc5c9577-xdbwc" [b6728420-bc99-481f-a127-e26a200ebe6f] Running
	I0903 22:28:31.817559  113968 system_pods.go:61] "etcd-addons-389176" [465a8b74-d804-4b01-9271-79a24cb8264b] Running
	I0903 22:28:31.817565  113968 system_pods.go:61] "kube-apiserver-addons-389176" [23e1569d-21dc-43aa-8dcd-e89bdb156aae] Running
	I0903 22:28:31.817571  113968 system_pods.go:61] "kube-controller-manager-addons-389176" [b5625f7e-4b6a-45ab-a8a3-853ffa438bf9] Running
	I0903 22:28:31.817582  113968 system_pods.go:61] "kube-ingress-dns-minikube" [bceeeeac-596d-4b1e-8ae1-ca4f3830e59c] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0903 22:28:31.817590  113968 system_pods.go:61] "kube-proxy-6lnvs" [058b2a43-7f98-4c22-a7f3-6e6ce78ef135] Running
	I0903 22:28:31.817595  113968 system_pods.go:61] "kube-scheduler-addons-389176" [4e3a841b-3852-45b6-8f6f-ad8f554c196e] Running
	I0903 22:28:31.817605  113968 system_pods.go:61] "metrics-server-85b7d694d7-mz7wt" [1b3ad0b2-9c1c-48e1-b571-fc3871122514] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0903 22:28:31.817617  113968 system_pods.go:61] "nvidia-device-plugin-daemonset-x5kx7" [b7bd9f8c-fc5e-4bb5-91a5-c454cebcabc2] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0903 22:28:31.817628  113968 system_pods.go:61] "registry-66898fdd98-rnmbr" [cbdd00d6-7b6a-49e1-a285-71f1c8a40580] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0903 22:28:31.817639  113968 system_pods.go:61] "registry-creds-764b6fb674-22jjq" [54b0d885-34a4-4166-922b-841ca18277d6] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0903 22:28:31.817650  113968 system_pods.go:61] "registry-proxy-7h9df" [0cf8a4d4-8129-4399-9aeb-6c79b6faba16] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0903 22:28:31.817663  113968 system_pods.go:61] "snapshot-controller-7d9fbc56b8-6p428" [cdbb7a29-d7e5-40c4-ae41-3c40cab5203f] Pending
	I0903 22:28:31.817673  113968 system_pods.go:61] "snapshot-controller-7d9fbc56b8-6zlph" [720ba6d7-655e-43e5-9c4a-2649b166eeda] Pending
	I0903 22:28:31.817684  113968 system_pods.go:61] "storage-provisioner" [c8e54e9e-c1b3-454f-a505-17fb0f986291] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0903 22:28:31.817695  113968 system_pods.go:74] duration metric: took 126.434499ms to wait for pod list to return data ...
	I0903 22:28:31.817710  113968 default_sa.go:34] waiting for default service account to be created ...
	I0903 22:28:31.856049  113968 default_sa.go:45] found service account: "default"
	I0903 22:28:31.856075  113968 default_sa.go:55] duration metric: took 38.355709ms for default service account to be created ...
	I0903 22:28:31.856085  113968 system_pods.go:116] waiting for k8s-apps to be running ...
	I0903 22:28:31.877901  113968 system_pods.go:86] 17 kube-system pods found
	I0903 22:28:31.877952  113968 system_pods.go:89] "amd-gpu-device-plugin-p5trw" [e7710252-13ae-493d-94b9-a0fc2013d283] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0903 22:28:31.877975  113968 system_pods.go:89] "coredns-66bc5c9577-fhjd7" [1d05fa59-7212-458e-b74b-c9f2803d2a69] Running
	I0903 22:28:31.877981  113968 system_pods.go:89] "coredns-66bc5c9577-xdbwc" [b6728420-bc99-481f-a127-e26a200ebe6f] Running
	I0903 22:28:31.877985  113968 system_pods.go:89] "etcd-addons-389176" [465a8b74-d804-4b01-9271-79a24cb8264b] Running
	I0903 22:28:31.877989  113968 system_pods.go:89] "kube-apiserver-addons-389176" [23e1569d-21dc-43aa-8dcd-e89bdb156aae] Running
	I0903 22:28:31.877994  113968 system_pods.go:89] "kube-controller-manager-addons-389176" [b5625f7e-4b6a-45ab-a8a3-853ffa438bf9] Running
	I0903 22:28:31.878006  113968 system_pods.go:89] "kube-ingress-dns-minikube" [bceeeeac-596d-4b1e-8ae1-ca4f3830e59c] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0903 22:28:31.878015  113968 system_pods.go:89] "kube-proxy-6lnvs" [058b2a43-7f98-4c22-a7f3-6e6ce78ef135] Running
	I0903 22:28:31.878025  113968 system_pods.go:89] "kube-scheduler-addons-389176" [4e3a841b-3852-45b6-8f6f-ad8f554c196e] Running
	I0903 22:28:31.878034  113968 system_pods.go:89] "metrics-server-85b7d694d7-mz7wt" [1b3ad0b2-9c1c-48e1-b571-fc3871122514] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0903 22:28:31.878071  113968 system_pods.go:89] "nvidia-device-plugin-daemonset-x5kx7" [b7bd9f8c-fc5e-4bb5-91a5-c454cebcabc2] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0903 22:28:31.878081  113968 system_pods.go:89] "registry-66898fdd98-rnmbr" [cbdd00d6-7b6a-49e1-a285-71f1c8a40580] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0903 22:28:31.878091  113968 system_pods.go:89] "registry-creds-764b6fb674-22jjq" [54b0d885-34a4-4166-922b-841ca18277d6] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0903 22:28:31.878102  113968 system_pods.go:89] "registry-proxy-7h9df" [0cf8a4d4-8129-4399-9aeb-6c79b6faba16] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0903 22:28:31.878112  113968 system_pods.go:89] "snapshot-controller-7d9fbc56b8-6p428" [cdbb7a29-d7e5-40c4-ae41-3c40cab5203f] Pending
	I0903 22:28:31.878124  113968 system_pods.go:89] "snapshot-controller-7d9fbc56b8-6zlph" [720ba6d7-655e-43e5-9c4a-2649b166eeda] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0903 22:28:31.878134  113968 system_pods.go:89] "storage-provisioner" [c8e54e9e-c1b3-454f-a505-17fb0f986291] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0903 22:28:31.878148  113968 system_pods.go:126] duration metric: took 22.055742ms to wait for k8s-apps to be running ...
	I0903 22:28:31.878162  113968 system_svc.go:44] waiting for kubelet service to be running ....
	I0903 22:28:31.878223  113968 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
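
`systemctl is-active --quiet` prints nothing and reports purely through its exit code (0 means active), which is why the check above needs no output parsing. Run over SSH on the node, it reduces to this sketch (illustrative only):

	package example

	import "os/exec"

	// kubeletRunning mirrors the check above: exit code 0 from
	// `systemctl is-active --quiet` means the unit is active.
	func kubeletRunning() bool {
		return exec.Command("sudo", "systemctl", "is-active", "--quiet", "service", "kubelet").Run() == nil
	}
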
	I0903 22:28:31.949839  113968 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0903 22:28:32.106421  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0903 22:28:32.108193  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:28:32.620601  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0903 22:28:32.620607  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:28:32.673183  113968 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (4.540612947s)
	I0903 22:28:32.673257  113968 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.619111941s)
	I0903 22:28:32.673258  113968 main.go:141] libmachine: Making call to close driver server
	I0903 22:28:32.673415  113968 main.go:141] libmachine: (addons-389176) Calling .Close
	I0903 22:28:32.673827  113968 main.go:141] libmachine: Successfully made call to close driver server
	I0903 22:28:32.673841  113968 main.go:141] libmachine: Making call to close connection to plugin binary
	I0903 22:28:32.673849  113968 main.go:141] libmachine: Making call to close driver server
	I0903 22:28:32.673856  113968 main.go:141] libmachine: (addons-389176) Calling .Close
	I0903 22:28:32.674111  113968 main.go:141] libmachine: (addons-389176) DBG | Closing plugin on server side
	I0903 22:28:32.674198  113968 main.go:141] libmachine: Successfully made call to close driver server
	I0903 22:28:32.674214  113968 main.go:141] libmachine: Making call to close connection to plugin binary
	I0903 22:28:32.674308  113968 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-389176"
	I0903 22:28:32.674678  113968 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I0903 22:28:32.675564  113968 out.go:179] * Verifying csi-hostpath-driver addon...
	I0903 22:28:32.676615  113968 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I0903 22:28:32.677552  113968 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0903 22:28:32.677719  113968 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0903 22:28:32.677735  113968 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0903 22:28:32.728488  113968 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0903 22:28:32.728511  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 22:28:32.833789  113968 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0903 22:28:32.833818  113968 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0903 22:28:32.953249  113968 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0903 22:28:32.953272  113968 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0903 22:28:33.056263  113968 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0903 22:28:33.102986  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:28:33.105740  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0903 22:28:33.187272  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 22:28:33.597855  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0903 22:28:33.599889  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:28:33.683273  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 22:28:34.100099  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0903 22:28:34.100405  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:28:34.198471  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 22:28:34.629157  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0903 22:28:34.629216  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:28:34.711724  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 22:28:34.810021  113968 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (3.024010738s)
	I0903 22:28:34.810069  113968 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (2.931817321s)
	W0903 22:28:34.810072  113968 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0903 22:28:34.810100  113968 system_svc.go:56] duration metric: took 2.931933708s WaitForService to wait for kubelet
	I0903 22:28:34.810110  113968 retry.go:31] will retry after 431.927776ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
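
The validation failure above pins the problem down precisely: every document in a manifest file must set apiVersion and kind, and at least one document in ig-crd.yaml does not (an empty document left by a stray `---` separator can produce the same message). Since the other gadget objects apply cleanly, only that document is at fault. A small checker that decodes each YAML document and flags incomplete ones (assumes gopkg.in/yaml.v3; illustrative, not part of the test suite):

	package main

	import (
		"fmt"
		"io"
		"os"

		"gopkg.in/yaml.v3"
	)

	type typeMeta struct {
		APIVersion string `yaml:"apiVersion"`
		Kind       string `yaml:"kind"`
	}

	func main() {
		f, err := os.Open("ig-crd.yaml")
		if err != nil {
			panic(err)
		}
		defer f.Close()

		dec := yaml.NewDecoder(f)
		for i := 1; ; i++ {
			var tm typeMeta
			if err := dec.Decode(&tm); err == io.EOF {
				break
			} else if err != nil {
				panic(err)
			}
			// Documents that decode to the zero value lack the mandatory fields.
			if tm.APIVersion == "" || tm.Kind == "" {
				fmt.Printf("document %d: apiVersion or kind not set\n", i)
			}
		}
	}
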
	I0903 22:28:34.810113  113968 kubeadm.go:578] duration metric: took 11.720539018s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0903 22:28:34.810135  113968 node_conditions.go:102] verifying NodePressure condition ...
	I0903 22:28:34.810261  113968 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.860366934s)
	I0903 22:28:34.810314  113968 main.go:141] libmachine: Making call to close driver server
	I0903 22:28:34.810330  113968 main.go:141] libmachine: (addons-389176) Calling .Close
	I0903 22:28:34.810356  113968 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.75405011s)
	I0903 22:28:34.810399  113968 main.go:141] libmachine: Making call to close driver server
	I0903 22:28:34.810414  113968 main.go:141] libmachine: (addons-389176) Calling .Close
	I0903 22:28:34.810594  113968 main.go:141] libmachine: Successfully made call to close driver server
	I0903 22:28:34.810630  113968 main.go:141] libmachine: Making call to close connection to plugin binary
	I0903 22:28:34.810644  113968 main.go:141] libmachine: Making call to close driver server
	I0903 22:28:34.810652  113968 main.go:141] libmachine: (addons-389176) Calling .Close
	I0903 22:28:34.810740  113968 main.go:141] libmachine: Successfully made call to close driver server
	I0903 22:28:34.810755  113968 main.go:141] libmachine: Making call to close connection to plugin binary
	I0903 22:28:34.810760  113968 main.go:141] libmachine: (addons-389176) DBG | Closing plugin on server side
	I0903 22:28:34.810765  113968 main.go:141] libmachine: Making call to close driver server
	I0903 22:28:34.810779  113968 main.go:141] libmachine: (addons-389176) Calling .Close
	I0903 22:28:34.810833  113968 main.go:141] libmachine: (addons-389176) DBG | Closing plugin on server side
	I0903 22:28:34.810850  113968 main.go:141] libmachine: Successfully made call to close driver server
	I0903 22:28:34.810855  113968 main.go:141] libmachine: Making call to close connection to plugin binary
	I0903 22:28:34.811033  113968 main.go:141] libmachine: Successfully made call to close driver server
	I0903 22:28:34.811047  113968 main.go:141] libmachine: Making call to close connection to plugin binary
	I0903 22:28:34.813185  113968 addons.go:479] Verifying addon gcp-auth=true in "addons-389176"
	I0903 22:28:34.814814  113968 out.go:179] * Verifying gcp-auth addon...
	I0903 22:28:34.814813  113968 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0903 22:28:34.814845  113968 node_conditions.go:123] node cpu capacity is 2
	I0903 22:28:34.814863  113968 node_conditions.go:105] duration metric: took 4.721099ms to run NodePressure ...
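
The node_conditions lines read each node's capacity and pressure conditions; the NodePressure check passes when MemoryPressure, DiskPressure, and PIDPressure are all False. A client-go sketch of the same verification (illustrative names, not minikube's code):

	package example

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	func checkNodePressure(ctx context.Context, cs kubernetes.Interface) error {
		nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
		if err != nil {
			return err
		}
		for _, n := range nodes.Items {
			fmt.Printf("node %s: cpu=%s ephemeral-storage=%s\n", n.Name,
				n.Status.Capacity.Cpu(), n.Status.Capacity.StorageEphemeral())
			for _, c := range n.Status.Conditions {
				// All three pressure conditions should be False on a healthy node.
				if (c.Type == corev1.NodeMemoryPressure || c.Type == corev1.NodeDiskPressure ||
					c.Type == corev1.NodePIDPressure) && c.Status == corev1.ConditionTrue {
					return fmt.Errorf("node %s has %s", n.Name, c.Type)
				}
			}
		}
		return nil
	}
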
	I0903 22:28:34.814876  113968 start.go:241] waiting for startup goroutines ...
	I0903 22:28:34.816536  113968 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0903 22:28:34.820058  113968 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0903 22:28:34.820078  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:28:35.096437  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0903 22:28:35.097150  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:28:35.183624  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 22:28:35.242626  113968 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0903 22:28:35.325563  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:28:35.597076  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0903 22:28:35.598992  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:28:35.697200  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 22:28:35.825212  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:28:36.098744  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0903 22:28:36.100538  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:28:36.189912  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 22:28:36.322016  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:28:36.453818  113968 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.211143404s)
	W0903 22:28:36.453878  113968 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0903 22:28:36.453913  113968 retry.go:31] will retry after 764.191426ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0903 22:28:36.597610  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:28:36.597610  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0903 22:28:36.683437  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 22:28:36.822835  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:28:37.101418  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0903 22:28:37.101838  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:28:37.184024  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 22:28:37.219071  113968 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0903 22:28:37.322564  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:28:37.597860  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0903 22:28:37.601939  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:28:37.681802  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 22:28:37.824141  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:28:38.097992  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:28:38.099638  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0903 22:28:38.183090  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 22:28:38.320825  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:28:38.328578  113968 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.109463336s)
	W0903 22:28:38.328623  113968 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0903 22:28:38.328644  113968 retry.go:31] will retry after 1.122031572s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0903 22:28:38.600058  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:28:38.600711  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0903 22:28:38.681807  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 22:28:38.819891  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:28:39.099208  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:28:39.099724  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0903 22:28:39.182360  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 22:28:39.319422  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:28:39.451643  113968 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0903 22:28:39.599823  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0903 22:28:39.601954  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:28:39.682627  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 22:28:39.820086  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:28:40.095151  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0903 22:28:40.098233  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:28:40.182472  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 22:28:40.323463  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0903 22:28:40.353637  113968 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0903 22:28:40.353670  113968 retry.go:31] will retry after 1.815042208s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
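
The retry delays logged so far (432ms, 764ms, 1.12s, 1.82s, and now 1.82s more) grow roughly geometrically with random jitter, the usual pattern for avoiding synchronized retries against a busy apiserver. A sketch of that backoff shape (our illustration; minikube's retry.go may differ in detail):

	package example

	import (
		"math/rand"
		"time"
	)

	// backoff returns the delay before retry attempt n (0-based): the base
	// doubles each attempt, with up to 50% random jitter added so that
	// concurrent retriers do not fire in lockstep.
	func backoff(n int, base time.Duration) time.Duration {
		d := base << uint(n) // base * 2^n; base must be > 1ns so d/2 > 0
		jitter := time.Duration(rand.Int63n(int64(d) / 2))
		return d + jitter
	}

	func retryApply(apply func() error, attempts int) error {
		var err error
		for n := 0; n < attempts; n++ {
			if err = apply(); err == nil {
				return nil
			}
			time.Sleep(backoff(n, 400*time.Millisecond))
		}
		return err
	}
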
	I0903 22:28:40.596026  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0903 22:28:40.596135  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:28:40.682956  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 22:28:40.819942  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:28:41.095523  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0903 22:28:41.098700  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:28:41.183548  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 22:28:41.320792  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:28:41.601892  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0903 22:28:41.609369  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:28:41.682242  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 22:28:41.823940  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:28:42.169750  113968 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0903 22:28:42.484749  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0903 22:28:42.489961  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:28:42.490081  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 22:28:42.490185  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:28:42.603729  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0903 22:28:42.607081  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:28:42.680976  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 22:28:42.821697  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0903 22:28:43.088550  113968 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0903 22:28:43.088582  113968 retry.go:31] will retry after 2.577368776s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0903 22:28:43.097671  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:28:43.097821  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0903 22:28:43.182620  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 22:28:43.320270  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:28:43.599605  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:28:43.601102  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0903 22:28:43.685708  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 22:28:43.821195  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:28:44.096244  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:28:44.097463  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0903 22:28:44.181291  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 22:28:44.321562  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:28:45.040233  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0903 22:28:45.041909  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:28:45.042150  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:28:45.042302  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 22:28:45.095496  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0903 22:28:45.097557  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:28:45.187629  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 22:28:45.322096  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:28:45.596757  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0903 22:28:45.598610  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:28:45.666719  113968 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0903 22:28:45.683328  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 22:28:45.825014  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:28:46.094332  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0903 22:28:46.095979  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:28:46.183840  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 22:28:46.321968  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0903 22:28:46.571406  113968 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0903 22:28:46.571443  113968 retry.go:31] will retry after 1.487647317s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0903 22:28:46.596273  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:28:46.596684  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0903 22:28:46.681887  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 22:28:46.820676  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:28:47.138435  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:28:47.138602  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0903 22:28:47.182376  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 22:28:47.321657  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:28:47.600134  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0903 22:28:47.600233  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:28:47.682275  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 22:28:47.820020  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:28:48.059235  113968 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0903 22:28:48.095578  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:28:48.096399  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0903 22:28:48.183117  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 22:28:48.319779  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:28:48.595909  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:28:48.596585  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0903 22:28:48.698124  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0903 22:28:48.723937  113968 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0903 22:28:48.723980  113968 retry.go:31] will retry after 4.57442689s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0903 22:28:48.820815  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:28:49.099068  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0903 22:28:49.100285  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:28:49.183653  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 22:28:49.323737  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:28:49.596381  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0903 22:28:49.599228  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:28:49.682882  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 22:28:49.819867  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:28:50.095851  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0903 22:28:50.096515  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:28:50.182091  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 22:28:50.319833  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:28:50.597288  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0903 22:28:50.598025  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:28:50.698683  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 22:28:50.819405  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:28:51.094546  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:28:51.095858  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0903 22:28:51.185120  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 22:28:51.320107  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:28:51.594908  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0903 22:28:51.595637  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:28:51.683575  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 22:28:51.819467  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:28:52.095260  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0903 22:28:52.096543  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:28:52.183043  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 22:28:52.320181  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:28:52.595810  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:28:52.596620  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0903 22:28:52.681425  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 22:28:52.821926  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:28:53.096307  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0903 22:28:53.098085  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:28:53.182436  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 22:28:53.299322  113968 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0903 22:28:53.321784  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:28:53.599215  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:28:53.599516  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0903 22:28:53.683899  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 22:28:53.819607  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:28:54.101338  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0903 22:28:54.103287  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:28:54.181160  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 22:28:54.320117  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:28:54.332406  113968 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.033038776s)
	W0903 22:28:54.332447  113968 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0903 22:28:54.332471  113968 retry.go:31] will retry after 5.378100101s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0903 22:28:54.598728  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0903 22:28:54.599642  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:28:54.681952  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 22:28:54.819855  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:28:55.097662  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0903 22:28:55.099475  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:28:55.183378  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 22:28:55.321798  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:28:55.598842  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0903 22:28:55.599693  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:28:55.687129  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 22:28:55.820026  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:28:56.097723  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0903 22:28:56.098101  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:28:56.182288  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 22:28:56.320145  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:28:56.595430  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0903 22:28:56.595805  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:28:56.696145  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 22:28:56.820044  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:28:57.095365  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0903 22:28:57.095447  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:28:57.182263  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 22:28:57.320444  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:28:57.595068  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0903 22:28:57.596035  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:28:57.681992  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 22:28:57.819956  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:28:58.096077  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:28:58.096349  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0903 22:28:58.181223  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 22:28:58.320961  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:28:58.595689  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:28:58.595970  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0903 22:28:58.681039  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 22:28:58.820274  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:28:59.096220  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0903 22:28:59.096498  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:28:59.181277  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 22:28:59.323101  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:28:59.595357  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0903 22:28:59.596942  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:28:59.682090  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 22:28:59.711157  113968 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0903 22:28:59.822400  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:29:00.096862  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:29:00.096894  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0903 22:29:00.182307  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 22:29:00.323257  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:29:00.597976  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0903 22:29:00.599771  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0903 22:29:00.620987  113968 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0903 22:29:00.621027  113968 retry.go:31] will retry after 8.671031791s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0903 22:29:00.686271  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 22:29:00.820584  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:29:01.095643  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0903 22:29:01.095707  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:29:01.181049  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 22:29:01.407377  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:29:01.595805  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0903 22:29:01.595975  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:29:01.680416  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 22:29:01.821340  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:29:02.095160  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0903 22:29:02.095487  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:29:02.182120  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 22:29:02.320071  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:29:02.594429  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0903 22:29:02.596766  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:29:02.682627  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 22:29:02.820010  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:29:03.097624  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0903 22:29:03.102502  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:29:03.181707  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 22:29:03.322843  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:29:03.598004  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:29:03.598019  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0903 22:29:03.681650  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 22:29:03.822382  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:29:04.095195  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0903 22:29:04.095365  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:29:04.187171  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 22:29:04.320901  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:29:04.597366  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:29:04.597453  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0903 22:29:04.682694  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 22:29:04.822308  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:29:05.095547  113968 kapi.go:107] duration metric: took 33.504371805s to wait for kubernetes.io/minikube-addons=registry ...
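
The kapi.go lines record a poll loop: list pods matching a label selector, log "Pending" until one reaches Running, then emit the duration metric seen just above (33.5s for the kubernetes.io/minikube-addons=registry selector). A sketch of that loop under stated assumptions — the kube-system namespace, a 500ms poll interval, and the kubeconfig path from the log are all guesses, and this is not the actual kapi.go implementation — using standard client-go calls:

	// podwait.go - sketch of a label-selector pod wait loop like the one
	// producing the "waiting for pod ... Pending" lines in this log.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitForPods polls pods matching selector until one is Running or the
	// timeout elapses, printing progress in the style of the log above.
	func waitForPods(cs *kubernetes.Clientset, ns, selector string, timeout time.Duration) error {
		ctx, cancel := context.WithTimeout(context.Background(), timeout)
		defer cancel()
		start := time.Now()
		for {
			pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err != nil {
				return err
			}
			for _, p := range pods.Items {
				if p.Status.Phase == corev1.PodRunning {
					fmt.Printf("duration metric: took %s to wait for %s\n", time.Since(start), selector)
					return nil
				}
			}
			fmt.Printf("waiting for pod %q, current state: Pending\n", selector)
			select {
			case <-ctx.Done():
				return ctx.Err()
			case <-time.After(500 * time.Millisecond): // assumed poll interval
			}
		}
	}

	func main() {
		config, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(config)
		if err != nil {
			panic(err)
		}
		if err := waitForPods(cs, "kube-system", "kubernetes.io/minikube-addons=registry", 5*time.Minute); err != nil {
			panic(err)
		}
	}
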
	I0903 22:29:05.097190  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:29:05.184326  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 22:29:05.320462  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:29:05.840328  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:29:05.842301  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 22:29:05.842299  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:29:06.097064  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:29:06.181770  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 22:29:06.320283  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:29:06.596513  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:29:06.681725  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 22:29:06.820430  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:29:07.096200  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:29:07.181574  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 22:29:07.320810  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:29:07.595758  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:29:07.682099  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 22:29:07.821463  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:29:08.097957  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:29:08.186159  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 22:29:08.320778  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:29:08.597679  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:29:08.682378  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 22:29:08.822690  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:29:09.096934  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:29:09.183278  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 22:29:09.293011  113968 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0903 22:29:09.322783  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:29:09.599674  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:29:09.684670  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 22:29:09.821198  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:29:10.098757  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:29:10.185961  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 22:29:10.321116  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:29:10.385826  113968 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.092763033s)
	W0903 22:29:10.385882  113968 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0903 22:29:10.385907  113968 retry.go:31] will retry after 14.394810883s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0903 22:29:10.595821  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:29:10.683655  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 22:29:10.819325  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:29:11.097222  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:29:11.181950  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 22:29:11.322152  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:29:11.598774  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:29:11.681709  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 22:29:11.822483  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:29:12.097546  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:29:12.183630  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 22:29:12.319957  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:29:12.596917  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:29:12.681152  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 22:29:12.820573  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:29:13.095523  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:29:13.182651  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 22:29:13.319717  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:29:13.596774  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:29:13.683373  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 22:29:13.822105  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:29:14.098012  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:29:14.182571  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 22:29:14.320645  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:29:14.597349  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:29:14.683384  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 22:29:14.820908  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:29:15.097144  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:29:15.186391  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 22:29:15.325096  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:29:16.057332  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:29:16.059684  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:29:16.059725  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 22:29:16.096553  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:29:16.184246  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 22:29:16.321218  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:29:16.596992  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:29:16.681342  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 22:29:16.822471  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:29:17.098492  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:29:17.181638  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 22:29:17.320446  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:29:17.595944  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:29:17.681230  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 22:29:17.826982  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:29:18.241594  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:29:18.241798  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 22:29:18.322324  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:29:18.598135  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:29:18.681343  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 22:29:18.821081  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:29:19.100477  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:29:19.182842  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 22:29:19.320440  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:29:19.596547  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:29:19.681810  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 22:29:19.819950  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:29:20.095112  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:29:20.181513  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 22:29:20.320593  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:29:20.596837  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:29:20.681344  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 22:29:20.820058  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:29:21.095805  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:29:21.181075  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 22:29:21.319922  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:29:21.596147  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:29:21.681537  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 22:29:21.820129  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:29:22.096091  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:29:22.181924  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 22:29:22.320769  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:29:22.595918  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:29:22.682607  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 22:29:22.821002  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:29:23.096654  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:29:23.184128  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 22:29:23.321305  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:29:23.596823  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:29:23.684821  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 22:29:23.820155  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:29:24.095972  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:29:24.181483  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 22:29:24.322216  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:29:24.596026  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:29:24.680936  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 22:29:24.780983  113968 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0903 22:29:24.819936  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:29:25.095496  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:29:25.181823  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 22:29:25.324332  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0903 22:29:25.518437  113968 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0903 22:29:25.518466  113968 retry.go:31] will retry after 27.028407273s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
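
The retry.go intervals logged for this failing apply (4.57s, 5.38s, 8.67s, 14.39s, and now 27.03s) grow roughly geometrically with random jitter. A hedged sketch of that retry-with-backoff pattern — the attempt count, 4s base, and 50% jitter factor are assumptions for illustration, not minikube's actual retry package:

	// retrysketch.go - sketch: run fn until it succeeds or attempts run out,
	// sleeping an exponentially growing, jittered interval between failures.
	package main

	import (
		"fmt"
		"math/rand"
		"time"
	)

	func retry(attempts int, base time.Duration, fn func() error) error {
		var err error
		for i := 0; i < attempts; i++ {
			if err = fn(); err == nil {
				return nil
			}
			// Double the base each round and add up to 50% random jitter,
			// giving spacing similar to the intervals in the log above.
			d := base << uint(i)
			d += time.Duration(rand.Int63n(int64(d / 2)))
			fmt.Printf("will retry after %v: %v\n", d, err)
			time.Sleep(d)
		}
		return err
	}

	func main() {
		_ = retry(5, 4*time.Second, func() error {
			return fmt.Errorf("apply failed") // stand-in for the kubectl apply
		})
	}
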
	I0903 22:29:25.597431  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:29:25.682755  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 22:29:25.820359  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:29:26.099531  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:29:26.184020  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 22:29:26.321754  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:29:26.596260  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:29:26.682326  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 22:29:26.821452  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:29:27.097904  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:29:27.182825  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 22:29:27.321546  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:29:27.596987  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:29:27.682884  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 22:29:27.822097  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:29:28.097496  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:29:28.183260  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 22:29:28.320829  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:29:28.595016  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:29:28.683413  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 22:29:28.821197  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:29:29.367774  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:29:29.369981  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:29:29.370063  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 22:29:29.597071  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:29:29.685283  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 22:29:29.820491  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:29:30.195286  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:29:30.195402  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 22:29:30.322343  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:29:30.595522  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:29:30.683551  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 22:29:30.823242  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:29:31.096472  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:29:31.184562  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 22:29:31.322061  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:29:31.608136  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:29:31.684162  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 22:29:31.820786  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:29:32.095845  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:29:32.195615  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 22:29:32.319686  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:29:32.598490  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:29:32.685631  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 22:29:32.820340  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:29:33.096227  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:29:33.184372  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 22:29:33.320103  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:29:33.595410  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:29:33.682112  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 22:29:33.821246  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:29:34.095995  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:29:34.181456  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 22:29:34.320558  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:29:34.601116  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:29:34.683510  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 22:29:34.819904  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:29:35.095836  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:29:35.182654  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 22:29:35.320376  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:29:35.615786  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:29:35.684334  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 22:29:35.822633  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:29:36.096643  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:29:36.180703  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 22:29:36.323095  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:29:36.599087  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:29:36.681904  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 22:29:36.820794  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:29:37.099724  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:29:37.201514  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 22:29:37.320800  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:29:37.598605  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:29:37.684212  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 22:29:37.821252  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:29:38.098868  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:29:38.184535  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 22:29:38.321801  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:29:38.596398  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:29:38.864784  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:29:38.864985  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 22:29:39.097103  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:29:39.198138  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 22:29:39.319906  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:29:39.597672  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:29:39.681753  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 22:29:39.824274  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:29:40.096191  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:29:40.182722  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 22:29:40.320329  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:29:40.597927  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:29:40.681967  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 22:29:40.822196  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:29:41.096704  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:29:41.181998  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 22:29:41.320121  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:29:41.596692  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:29:41.682504  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 22:29:41.821107  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:29:42.098064  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:29:42.199343  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 22:29:42.322137  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:29:42.694109  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:29:42.694362  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 22:29:42.820931  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:29:43.095304  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:29:43.183137  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 22:29:43.322932  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:29:43.595072  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:29:43.686270  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 22:29:43.821938  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:29:44.095689  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:29:44.196495  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 22:29:44.319848  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:29:44.596515  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:29:44.683975  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 22:29:44.823260  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:29:45.096849  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:29:45.183496  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 22:29:45.323180  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:29:45.598874  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:29:45.697118  113968 kapi.go:107] duration metric: took 1m13.019562769s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0903 22:29:45.820829  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:29:46.095191  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:29:46.320130  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:29:46.595825  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:29:46.820429  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:29:47.096603  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:29:47.321282  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:29:47.596316  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:29:47.822188  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:29:48.097067  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:29:48.320235  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:29:48.596238  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:29:48.820294  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:29:49.096146  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:29:49.321110  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:29:49.596648  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:29:49.819410  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:29:50.096133  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:29:50.320151  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:29:50.595699  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:29:50.822511  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:29:51.096159  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:29:51.320227  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:29:51.595728  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:29:51.819578  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:29:52.097798  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:29:52.320501  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:29:52.547928  113968 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0903 22:29:52.596481  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:29:52.820597  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:29:53.096153  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0903 22:29:53.201606  113968 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0903 22:29:53.201754  113968 main.go:141] libmachine: Making call to close driver server
	I0903 22:29:53.201777  113968 main.go:141] libmachine: (addons-389176) Calling .Close
	I0903 22:29:53.202030  113968 main.go:141] libmachine: Successfully made call to close driver server
	I0903 22:29:53.202054  113968 main.go:141] libmachine: Making call to close connection to plugin binary
	I0903 22:29:53.202065  113968 main.go:141] libmachine: Making call to close driver server
	I0903 22:29:53.202067  113968 main.go:141] libmachine: (addons-389176) DBG | Closing plugin on server side
	I0903 22:29:53.202073  113968 main.go:141] libmachine: (addons-389176) Calling .Close
	I0903 22:29:53.202279  113968 main.go:141] libmachine: (addons-389176) DBG | Closing plugin on server side
	I0903 22:29:53.202280  113968 main.go:141] libmachine: Successfully made call to close driver server
	I0903 22:29:53.202304  113968 main.go:141] libmachine: Making call to close connection to plugin binary
	W0903 22:29:53.202404  113968 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
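
The retry above is triggered by kubectl's client-side validation: one document in /etc/kubernetes/addons/ig-crd.yaml is missing the apiVersion and kind fields that every Kubernetes manifest must carry, so the whole apply exits 1 even though the other resources in the batch were accepted (hence the "unchanged"/"configured" stdout). A minimal pre-flight check for this failure mode might look like the following Go sketch; the helper name and the use of gopkg.in/yaml.v3 are illustrative assumptions, not minikube's actual code, and a real multi-document file (with --- separators) would need yaml.NewDecoder rather than a single Unmarshal:

    package main

    import (
    	"fmt"
    	"os"

    	yaml "gopkg.in/yaml.v3"
    )

    // requireTypeMeta reports an error if a manifest document lacks the
    // apiVersion or kind fields that kubectl's validation insists on.
    // (Hypothetical helper; only the first YAML document is inspected.)
    func requireTypeMeta(doc []byte) error {
    	var m struct {
    		APIVersion string `yaml:"apiVersion"`
    		Kind       string `yaml:"kind"`
    	}
    	if err := yaml.Unmarshal(doc, &m); err != nil {
    		return fmt.Errorf("not valid YAML: %w", err)
    	}
    	if m.APIVersion == "" || m.Kind == "" {
    		return fmt.Errorf("apiVersion or kind not set")
    	}
    	return nil
    }

    func main() {
    	data, err := os.ReadFile("ig-crd.yaml") // example input, per the log above
    	if err != nil {
    		panic(err)
    	}
    	if err := requireTypeMeta(data); err != nil {
    		fmt.Println("would fail kubectl validation:", err)
    	}
    }
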
	I0903 22:29:53.319887  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:29:53.595192  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:29:53.820070  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:29:54.095625  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:29:54.320433  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:29:54.597527  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:29:54.819908  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:29:55.095244  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:29:55.321057  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:29:55.596012  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:29:55.821141  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:29:56.095590  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:29:56.319776  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:29:56.596412  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:29:56.820630  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:29:57.097007  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:29:57.320436  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:29:57.596372  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:29:57.820450  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:29:58.096035  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:29:58.320097  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:29:58.596048  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:29:58.820161  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:29:59.096329  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:29:59.319895  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:29:59.595244  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:29:59.820528  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:30:00.095679  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:30:00.319786  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:30:00.595462  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:30:00.819811  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:30:01.096700  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:30:01.320772  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:30:01.595577  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:30:01.819841  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:30:02.095542  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:30:02.319872  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:30:02.595983  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:30:02.820311  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:30:03.096061  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:30:03.320479  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:30:03.595832  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:30:03.819729  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:30:04.095160  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:30:04.320061  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:30:04.597694  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:30:04.819947  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:30:05.095725  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:30:05.319586  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:30:05.595999  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:30:05.821377  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:30:06.095971  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:30:06.319643  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:30:06.596706  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:30:06.819573  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:30:07.096738  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:30:07.320183  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:30:07.596095  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:30:07.821268  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:30:08.095979  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:30:08.320415  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:30:08.595766  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:30:08.819633  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:30:09.096092  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:30:09.320067  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:30:09.595622  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:30:09.820581  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:30:10.096179  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:30:10.320583  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:30:10.596457  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:30:10.821210  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:30:11.096291  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:30:11.320432  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:30:11.596548  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:30:11.819527  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:30:12.097338  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:30:12.321215  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:30:12.595901  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:30:12.820741  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:30:13.095172  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:30:13.320403  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:30:13.596343  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:30:13.820841  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:30:14.095607  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:30:14.320183  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:30:14.595874  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:30:14.821241  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:30:15.095942  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:30:15.319704  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:30:15.596114  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:30:15.822405  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:30:16.095718  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:30:16.319613  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:30:16.596256  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:30:16.842534  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:30:17.096437  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:30:17.321007  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:30:17.595895  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:30:17.820759  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:30:18.095647  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:30:18.320850  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:30:18.595673  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:30:18.819487  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:30:19.096829  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:30:19.319598  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:30:19.596412  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:30:19.820729  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:30:20.095597  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:30:20.319679  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:30:20.596126  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:30:20.822055  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:30:21.096367  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:30:21.321187  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:30:21.596808  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:30:21.819905  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:30:22.095906  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:30:22.320457  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:30:22.596622  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:30:22.819967  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:30:23.096093  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:30:23.321065  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:30:23.595398  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:30:23.820280  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:30:24.096272  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:30:24.320920  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:30:24.595650  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:30:24.819669  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:30:25.097575  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:30:25.319965  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:30:25.595846  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:30:25.823683  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:30:26.095209  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:30:26.320141  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:30:26.597140  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:30:26.820314  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:30:27.096060  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:30:27.320674  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:30:27.596044  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:30:27.819929  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:30:28.095716  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:30:28.320586  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:30:28.596220  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:30:28.820326  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:30:29.096690  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:30:29.320316  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:30:29.596107  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:30:29.820655  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:30:30.096244  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:30:30.321840  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:30:30.596777  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:30:30.822779  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:30:31.095759  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:30:31.320061  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:30:31.595651  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:30:31.819939  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:30:32.095622  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:30:32.320418  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:30:32.596892  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:30:32.820075  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:30:33.097206  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:30:33.320226  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:30:33.596191  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:30:33.820509  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:30:34.096792  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:30:34.320385  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:30:34.596016  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:30:34.820231  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:30:35.096748  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:30:35.319690  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:30:35.596365  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:30:35.821084  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:30:36.096775  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:30:36.319528  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:30:36.597030  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:30:36.820101  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:30:37.096528  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:30:37.322140  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:30:37.595957  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:30:37.820094  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:30:38.096233  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:30:38.320846  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:30:38.595848  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:30:38.820253  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:30:39.096807  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:30:39.319785  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:30:39.595498  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:30:39.821091  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:30:40.096125  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:30:40.320279  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:30:40.596923  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:30:40.820858  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:30:41.096131  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:30:41.320650  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:30:41.596812  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:30:41.819889  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:30:42.096066  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:30:42.321303  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:30:42.595974  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:30:42.819733  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:30:43.096448  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:30:43.320140  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:30:43.595942  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:30:43.820225  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:30:44.095893  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:30:44.319693  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:30:44.596240  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:30:44.820341  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:30:45.096763  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:30:45.319697  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:30:45.595499  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:30:45.821394  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:30:46.096592  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:30:46.319651  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:30:46.596692  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:30:46.819484  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:30:47.096673  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:30:47.320407  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:30:47.595800  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:30:47.820991  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:30:48.095708  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:30:48.320596  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:30:48.597316  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:30:48.822880  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:30:49.095948  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:30:49.319629  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:30:49.598561  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:30:49.821218  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:30:50.096802  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:30:50.319396  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:30:50.595794  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:30:50.819644  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:30:51.100032  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:30:51.319908  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:30:51.598946  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:30:51.821141  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:30:52.100170  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:30:52.321261  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:30:52.596870  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:30:52.821461  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:30:53.097933  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:30:53.321405  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:30:53.596701  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:30:53.819809  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:30:54.095541  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:30:54.321323  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:30:54.597568  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:30:54.824828  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:30:55.098679  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:30:55.319838  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:30:55.599261  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:30:55.825181  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:30:56.096904  113968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 22:30:56.320423  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:30:56.597235  113968 kapi.go:107] duration metric: took 2m25.005361366s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0903 22:30:56.820250  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:30:57.320788  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:30:57.821143  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:30:58.322530  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:30:58.829808  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:30:59.320765  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:30:59.820699  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:31:00.321298  113968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 22:31:00.822093  113968 kapi.go:107] duration metric: took 2m26.005552434s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0903 22:31:00.823836  113968 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-389176 cluster.
	I0903 22:31:00.824982  113968 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0903 22:31:00.825973  113968 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0903 22:31:00.827105  113968 out.go:179] * Enabled addons: nvidia-device-plugin, amd-gpu-device-plugin, registry-creds, storage-provisioner, metrics-server, ingress-dns, cloud-spanner, yakd, storage-provisioner-rancher, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0903 22:31:00.828217  113968 addons.go:514] duration metric: took 2m37.738650838s for enable addons: enabled=[nvidia-device-plugin amd-gpu-device-plugin registry-creds storage-provisioner metrics-server ingress-dns cloud-spanner yakd storage-provisioner-rancher volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0903 22:31:00.828264  113968 start.go:246] waiting for cluster config update ...
	I0903 22:31:00.828288  113968 start.go:255] writing updated cluster config ...
	I0903 22:31:00.828600  113968 ssh_runner.go:195] Run: rm -f paused
	I0903 22:31:00.835380  113968 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0903 22:31:00.839537  113968 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-fhjd7" in "kube-system" namespace to be "Ready" or be gone ...
	I0903 22:31:00.844497  113968 pod_ready.go:94] pod "coredns-66bc5c9577-fhjd7" is "Ready"
	I0903 22:31:00.844526  113968 pod_ready.go:86] duration metric: took 4.965208ms for pod "coredns-66bc5c9577-fhjd7" in "kube-system" namespace to be "Ready" or be gone ...
	I0903 22:31:00.847499  113968 pod_ready.go:83] waiting for pod "etcd-addons-389176" in "kube-system" namespace to be "Ready" or be gone ...
	I0903 22:31:00.852149  113968 pod_ready.go:94] pod "etcd-addons-389176" is "Ready"
	I0903 22:31:00.852173  113968 pod_ready.go:86] duration metric: took 4.657011ms for pod "etcd-addons-389176" in "kube-system" namespace to be "Ready" or be gone ...
	I0903 22:31:00.853977  113968 pod_ready.go:83] waiting for pod "kube-apiserver-addons-389176" in "kube-system" namespace to be "Ready" or be gone ...
	I0903 22:31:00.861175  113968 pod_ready.go:94] pod "kube-apiserver-addons-389176" is "Ready"
	I0903 22:31:00.861193  113968 pod_ready.go:86] duration metric: took 7.193676ms for pod "kube-apiserver-addons-389176" in "kube-system" namespace to be "Ready" or be gone ...
	I0903 22:31:00.863672  113968 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-389176" in "kube-system" namespace to be "Ready" or be gone ...
	I0903 22:31:01.240009  113968 pod_ready.go:94] pod "kube-controller-manager-addons-389176" is "Ready"
	I0903 22:31:01.240049  113968 pod_ready.go:86] duration metric: took 376.358546ms for pod "kube-controller-manager-addons-389176" in "kube-system" namespace to be "Ready" or be gone ...
	I0903 22:31:01.441769  113968 pod_ready.go:83] waiting for pod "kube-proxy-6lnvs" in "kube-system" namespace to be "Ready" or be gone ...
	I0903 22:31:01.839531  113968 pod_ready.go:94] pod "kube-proxy-6lnvs" is "Ready"
	I0903 22:31:01.839558  113968 pod_ready.go:86] duration metric: took 397.753959ms for pod "kube-proxy-6lnvs" in "kube-system" namespace to be "Ready" or be gone ...
	I0903 22:31:02.040406  113968 pod_ready.go:83] waiting for pod "kube-scheduler-addons-389176" in "kube-system" namespace to be "Ready" or be gone ...
	I0903 22:31:02.440058  113968 pod_ready.go:94] pod "kube-scheduler-addons-389176" is "Ready"
	I0903 22:31:02.440085  113968 pod_ready.go:86] duration metric: took 399.648151ms for pod "kube-scheduler-addons-389176" in "kube-system" namespace to be "Ready" or be gone ...
	I0903 22:31:02.440096  113968 pod_ready.go:40] duration metric: took 1.60468724s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
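
The pod_ready checks above declare a pod "Ready" from its status conditions: the condition of type Ready must report status True. With the k8s.io/api/core/v1 types that test is a few lines; this is a sketch of the general pattern, not minikube's pod_ready.go:

    package podready

    import corev1 "k8s.io/api/core/v1"

    // IsPodReady reports whether the pod's Ready condition is True.
    func IsPodReady(pod *corev1.Pod) bool {
    	for _, c := range pod.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }
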
	I0903 22:31:02.483188  113968 start.go:617] kubectl: 1.33.2, cluster: 1.34.0 (minor skew: 1)
	I0903 22:31:02.485018  113968 out.go:179] * Done! kubectl is now configured to use "addons-389176" cluster and "default" namespace by default
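
The start.go:617 line reports "minor skew: 1" because kubectl's version-skew policy supports a client within one minor release of the cluster, so kubectl 1.33.2 against a 1.34.0 API server only warrants an informational note rather than a failure. A standard-library sketch of that comparison (hypothetical helper; ignores pre-release and build suffixes):

    package main

    import (
    	"fmt"
    	"strconv"
    	"strings"
    )

    // minorSkew returns the absolute difference between the minor versions
    // of two "major.minor.patch" strings, e.g. "1.33.2" vs "1.34.0" -> 1.
    func minorSkew(client, server string) (int, error) {
    	minor := func(v string) (int, error) {
    		parts := strings.Split(v, ".")
    		if len(parts) < 2 {
    			return 0, fmt.Errorf("unparseable version %q", v)
    		}
    		return strconv.Atoi(parts[1])
    	}
    	c, err := minor(client)
    	if err != nil {
    		return 0, err
    	}
    	s, err := minor(server)
    	if err != nil {
    		return 0, err
    	}
    	if c > s {
    		return c - s, nil
    	}
    	return s - c, nil
    }

    func main() {
    	skew, _ := minorSkew("1.33.2", "1.34.0")
    	fmt.Println("minor skew:", skew) // 1
    }
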
	
	
	==> CRI-O <==
	Sep 03 22:34:06 addons-389176 crio[825]: time="2025-09-03 22:34:06.514771098Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c2b937b4-a537-49a0-95fb-375592e198d4 name=/runtime.v1.RuntimeService/ListContainers
	Sep 03 22:34:06 addons-389176 crio[825]: time="2025-09-03 22:34:06.515103250Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8ab7d583f028fe7d63fca111a55647ee10ca5818d25b9563dabdd882d2519b23,PodSandboxId:41650b8a598a080b10e1bf3c667ff658d54a65609dc902d30691e4c949150185,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4a86014ec6994761b7f3118cf47e4b4fd6bac15fc6fa262c4f356386bbc0e9d9,State:CONTAINER_RUNNING,CreatedAt:1756938703683667033,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: abfe0e8a-d948-49f7-a8d4-d4af5a5f1495,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ccc599c49299db3c40284bffdb4bafc78b985f3419fb4528360cf5f5f81f443e,PodSandboxId:6ca3909db405d6e378d372dcf6bbeab4ad6976c200e07a96ece5e6762a69effb,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1756938666727770083,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f3a05009-9dab-4e77-ae8d-565eb5fedd3d,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b95eaef998432b7c63170dcd51fc1747b633b68980057779404962410a31e93e,PodSandboxId:5aa3715db0895390a2c37795b566e1a1f78733a7ef41a2e113e79142b6c9274a,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1f7eaeb01933e719c8a9f4acd8181e555e582330c7d50f24484fb64d2ba9b2ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1bec18b3728e7489d64104958b9da774a7d1c7f0f8b2bae7330480b4891f6f56,State:CONTAINER_RUNNING,CreatedAt:1756938655554549892,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-9cc49f96f-6dv87,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 51d52515-230c-477b-a6d9-ec97c2ba7707,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: d75193f7,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:da3c07761992bbf2aeca1f1fcab3876f66273f8c463fb0b400e2c25f9daa2fca,PodSandboxId:31ee385978088cb985cf841b306878cbcd9b3e0df57668052d973a22f6656fed,Metadata:&ContainerMetadata{Name:patch,Attempt:2,},Image:&ImageSpec{Image:8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65,Sta
te:CONTAINER_EXITED,CreatedAt:1756938587662362102,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-f68v4,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: b81bcbd3-1be1-436a-a211-9ad4215e0e5a,},Annotations:map[string]string{io.kubernetes.container.hash: b2514b62,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f65119b144346ca60865b138d54c86cfb3d140eba6b38e6f3e6cb78160d0fa47,PodSandboxId:f5e42f2b2f06762e8c50f219afb2a657a62b6d1e799ed44d8a39d60c517f1596,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c217da6734db0feee6a8fa1d1697145
49c20bcb8c123ef218aec5d591e3fd65,State:CONTAINER_EXITED,CreatedAt:1756938575811768926,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-bxb2s,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: beba99ad-7ba4-4b34-a1a2-274b441aa7d2,},Annotations:map[string]string{io.kubernetes.container.hash: a3467dfb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8bda341f230430af14af579406d4511c047b5021a55504d1bf90e20a91211445,PodSandboxId:6303dac2d351b6cab897e7bc302102fbb9d76b168a9ff5286bc664e604811f47,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:5768867a6776f266b9c9c6b8b32a069f9346493f7bede50c3dcb28859f36d506,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c08
45d92124611df4baa18fd96892b107dd9c28f6178793e020b10264622c27b,State:CONTAINER_RUNNING,CreatedAt:1756938570289969269,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-bm5l4,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: 2f9eccea-ea75-4a9b-9fd6-ee1c0042454b,},Annotations:map[string]string{io.kubernetes.container.hash: 8e2c4a14,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d3ce9d1e1b486d1a21d68133219153c17d7297ba0d400bfc6a318b90f06df3ea,PodSandboxId:a7b92659dabaa8afedcba2ad62e1db5d70ec9bee21f8395e00164c6338b53428,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c88
0e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1756938556171305908,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bceeeeac-596d-4b1e-8ae1-ca4f3830e59c,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9655328810b78a6f6a280c6895f6a04af2fa1b5c7b31da91b2822e059370b2aa,PodSandboxId:8680b30e788e13afe6aa7a0ee1f43f7dc6b2d3d18bd0681452bb4b7f26c10427,Metadata:&ContainerMetadata{Name:amd-gpu
-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1756938529960457898,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-p5trw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e7710252-13ae-493d-94b9-a0fc2013d283,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ae2499c31ec8a770937ee0bbcc688ee6a29698124d38b100c3edd23913fa155,PodSandboxId:335eea715e85bfede6502e1a46a8f56f32da03b5969548e949b0c4a40a6faf1c,
Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1756938510392046832,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c8e54e9e-c1b3-454f-a505-17fb0f986291,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2eeab5d7870fc47b5b4cbbb9e8f8124c30c84e8b418ce270647a3da86d038d34,PodSandboxId:a4fd80b916375e396fda60101eafe184a21f39f5e495c97af68248792918f163,Metadata:&Co
ntainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1756938504727360772,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-fhjd7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d05fa59-7212-458e-b74b-c9f2803d2a69,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0
,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e88411ff4cc37350199fd30771bec1659a649593a5ae7d5076c3ef260e553c71,PodSandboxId:88bf13c0becde20d600ed97b0e01bea6f977759a488be4a24e44808e14a08185,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_RUNNING,CreatedAt:1756938504201601729,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6lnvs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 058b2a43-7f98-4c22-a7f3-6e6ce78ef135,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminati
onMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:950ca1e7ea313fa25a2f2cdcb4a5ed02bc8cf29ed215f5ab72b3593bde0d9454,PodSandboxId:377645d7b8f3ca85371adede0deb3cafb2abd7a09211f66408e90077413c1030,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1756938492550451133,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-389176,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0cd4777d041de10bdb74570c36f2229c,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}]
,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:080fecfad6b312d583585f69aba7c7a2f031e9b056d294d5c16e1662c09d86de,PodSandboxId:69f2f1ab139f2802033655382be772393e845fb4778debb19d0f770dfa01678f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,State:CONTAINER_RUNNING,CreatedAt:1756938492560745510,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-389176,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88c617302b7c7965f0ccc181f5735f8f,},Annotations:map[string]string{io.kubernetes.container.hash: d671eaa0,io.kubernetes.co
ntainer.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c3ebea097f7bc3ec9333eae29d8f6e1a05468368f83f94be79835b0a26135b2,PodSandboxId:3860cdd62ce75fdb4967acd6fc33ee358e1571ffdcbf059f3f65aa69fd788059,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_RUNNING,CreatedAt:1756938492521702056,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-389176,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 0c4204cd0117452109262f7ad5a08e28,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06fabcded7ce22ce2ca6a78471c77e6fbf3252cf2a397e30f2d99a5fac105aa5,PodSandboxId:1f800d40485280b2b86c5e052bd4def9bbf2af390f10b5ddf8d0d32a2461e04c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_RUNNING,CreatedAt:1756938492517585283,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,
io.kubernetes.pod.name: kube-scheduler-addons-389176,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 339689c5776149b55fff9fc18f6147d8,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c2b937b4-a537-49a0-95fb-375592e198d4 name=/runtime.v1.RuntimeService/ListContainers
	Sep 03 22:34:06 addons-389176 crio[825]: time="2025-09-03 22:34:06.541349141Z" level=debug msg="Received container exit code: 0, message: " file="oci/runtime_oci.go:670" id=80cf1de2-1bb7-4269-bfa0-a1e17837a70f name=/runtime.v1.RuntimeService/ExecSync
	Sep 03 22:34:06 addons-389176 crio[825]: time="2025-09-03 22:34:06.541512669Z" level=debug msg="Response: &ExecSyncResponse{Stdout:[FILTERED],Stderr:[],ExitCode:0,}" file="otel-collector/interceptors.go:74" id=80cf1de2-1bb7-4269-bfa0-a1e17837a70f name=/runtime.v1.RuntimeService/ExecSync
	Sep 03 22:34:06 addons-389176 crio[825]: time="2025-09-03 22:34:06.562780463Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=37da4512-3eda-4f11-9f49-f0ecd1e88f13 name=/runtime.v1.RuntimeService/Version
	Sep 03 22:34:06 addons-389176 crio[825]: time="2025-09-03 22:34:06.562855066Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=37da4512-3eda-4f11-9f49-f0ecd1e88f13 name=/runtime.v1.RuntimeService/Version
	Sep 03 22:34:06 addons-389176 crio[825]: time="2025-09-03 22:34:06.563799251Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e3058ff2-f286-4a49-908e-877f109c76c2 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 03 22:34:06 addons-389176 crio[825]: time="2025-09-03 22:34:06.565816098Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1756938846565790316,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596879,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e3058ff2-f286-4a49-908e-877f109c76c2 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 03 22:34:06 addons-389176 crio[825]: time="2025-09-03 22:34:06.566668588Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8630f88a-d3cc-43ee-8583-1ef0ea22d49e name=/runtime.v1.RuntimeService/ListContainers
	Sep 03 22:34:06 addons-389176 crio[825]: time="2025-09-03 22:34:06.566814253Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8630f88a-d3cc-43ee-8583-1ef0ea22d49e name=/runtime.v1.RuntimeService/ListContainers
	Sep 03 22:34:06 addons-389176 crio[825]: time="2025-09-03 22:34:06.567300943Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8ab7d583f028fe7d63fca111a55647ee10ca5818d25b9563dabdd882d2519b23,PodSandboxId:41650b8a598a080b10e1bf3c667ff658d54a65609dc902d30691e4c949150185,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4a86014ec6994761b7f3118cf47e4b4fd6bac15fc6fa262c4f356386bbc0e9d9,State:CONTAINER_RUNNING,CreatedAt:1756938703683667033,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: abfe0e8a-d948-49f7-a8d4-d4af5a5f1495,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ccc599c49299db3c40284bffdb4bafc78b985f3419fb4528360cf5f5f81f443e,PodSandboxId:6ca3909db405d6e378d372dcf6bbeab4ad6976c200e07a96ece5e6762a69effb,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1756938666727770083,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f3a05009-9dab-4e77-ae8d-565eb5fedd3d,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b95eaef998432b7c63170dcd51fc1747b633b68980057779404962410a31e93e,PodSandboxId:5aa3715db0895390a2c37795b566e1a1f78733a7ef41a2e113e79142b6c9274a,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1f7eaeb01933e719c8a9f4acd8181e555e582330c7d50f24484fb64d2ba9b2ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1bec18b3728e7489d64104958b9da774a7d1c7f0f8b2bae7330480b4891f6f56,State:CONTAINER_RUNNING,CreatedAt:1756938655554549892,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-9cc49f96f-6dv87,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 51d52515-230c-477b-a6d9-ec97c2ba7707,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: d75193f7,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:da3c07761992bbf2aeca1f1fcab3876f66273f8c463fb0b400e2c25f9daa2fca,PodSandboxId:31ee385978088cb985cf841b306878cbcd9b3e0df57668052d973a22f6656fed,Metadata:&ContainerMetadata{Name:patch,Attempt:2,},Image:&ImageSpec{Image:8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65,Sta
te:CONTAINER_EXITED,CreatedAt:1756938587662362102,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-f68v4,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: b81bcbd3-1be1-436a-a211-9ad4215e0e5a,},Annotations:map[string]string{io.kubernetes.container.hash: b2514b62,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f65119b144346ca60865b138d54c86cfb3d140eba6b38e6f3e6cb78160d0fa47,PodSandboxId:f5e42f2b2f06762e8c50f219afb2a657a62b6d1e799ed44d8a39d60c517f1596,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c217da6734db0feee6a8fa1d1697145
49c20bcb8c123ef218aec5d591e3fd65,State:CONTAINER_EXITED,CreatedAt:1756938575811768926,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-bxb2s,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: beba99ad-7ba4-4b34-a1a2-274b441aa7d2,},Annotations:map[string]string{io.kubernetes.container.hash: a3467dfb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8bda341f230430af14af579406d4511c047b5021a55504d1bf90e20a91211445,PodSandboxId:6303dac2d351b6cab897e7bc302102fbb9d76b168a9ff5286bc664e604811f47,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:5768867a6776f266b9c9c6b8b32a069f9346493f7bede50c3dcb28859f36d506,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c08
45d92124611df4baa18fd96892b107dd9c28f6178793e020b10264622c27b,State:CONTAINER_RUNNING,CreatedAt:1756938570289969269,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-bm5l4,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: 2f9eccea-ea75-4a9b-9fd6-ee1c0042454b,},Annotations:map[string]string{io.kubernetes.container.hash: 8e2c4a14,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d3ce9d1e1b486d1a21d68133219153c17d7297ba0d400bfc6a318b90f06df3ea,PodSandboxId:a7b92659dabaa8afedcba2ad62e1db5d70ec9bee21f8395e00164c6338b53428,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c88
0e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1756938556171305908,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bceeeeac-596d-4b1e-8ae1-ca4f3830e59c,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9655328810b78a6f6a280c6895f6a04af2fa1b5c7b31da91b2822e059370b2aa,PodSandboxId:8680b30e788e13afe6aa7a0ee1f43f7dc6b2d3d18bd0681452bb4b7f26c10427,Metadata:&ContainerMetadata{Name:amd-gpu
-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1756938529960457898,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-p5trw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e7710252-13ae-493d-94b9-a0fc2013d283,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ae2499c31ec8a770937ee0bbcc688ee6a29698124d38b100c3edd23913fa155,PodSandboxId:335eea715e85bfede6502e1a46a8f56f32da03b5969548e949b0c4a40a6faf1c,
Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1756938510392046832,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c8e54e9e-c1b3-454f-a505-17fb0f986291,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2eeab5d7870fc47b5b4cbbb9e8f8124c30c84e8b418ce270647a3da86d038d34,PodSandboxId:a4fd80b916375e396fda60101eafe184a21f39f5e495c97af68248792918f163,Metadata:&Co
ntainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1756938504727360772,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-fhjd7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d05fa59-7212-458e-b74b-c9f2803d2a69,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0
,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e88411ff4cc37350199fd30771bec1659a649593a5ae7d5076c3ef260e553c71,PodSandboxId:88bf13c0becde20d600ed97b0e01bea6f977759a488be4a24e44808e14a08185,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_RUNNING,CreatedAt:1756938504201601729,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6lnvs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 058b2a43-7f98-4c22-a7f3-6e6ce78ef135,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminati
onMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:950ca1e7ea313fa25a2f2cdcb4a5ed02bc8cf29ed215f5ab72b3593bde0d9454,PodSandboxId:377645d7b8f3ca85371adede0deb3cafb2abd7a09211f66408e90077413c1030,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1756938492550451133,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-389176,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0cd4777d041de10bdb74570c36f2229c,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}]
,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:080fecfad6b312d583585f69aba7c7a2f031e9b056d294d5c16e1662c09d86de,PodSandboxId:69f2f1ab139f2802033655382be772393e845fb4778debb19d0f770dfa01678f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,State:CONTAINER_RUNNING,CreatedAt:1756938492560745510,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-389176,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88c617302b7c7965f0ccc181f5735f8f,},Annotations:map[string]string{io.kubernetes.container.hash: d671eaa0,io.kubernetes.co
ntainer.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c3ebea097f7bc3ec9333eae29d8f6e1a05468368f83f94be79835b0a26135b2,PodSandboxId:3860cdd62ce75fdb4967acd6fc33ee358e1571ffdcbf059f3f65aa69fd788059,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_RUNNING,CreatedAt:1756938492521702056,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-389176,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 0c4204cd0117452109262f7ad5a08e28,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06fabcded7ce22ce2ca6a78471c77e6fbf3252cf2a397e30f2d99a5fac105aa5,PodSandboxId:1f800d40485280b2b86c5e052bd4def9bbf2af390f10b5ddf8d0d32a2461e04c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_RUNNING,CreatedAt:1756938492517585283,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,
io.kubernetes.pod.name: kube-scheduler-addons-389176,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 339689c5776149b55fff9fc18f6147d8,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8630f88a-d3cc-43ee-8583-1ef0ea22d49e name=/runtime.v1.RuntimeService/ListContainers
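Each Version → ImageFsInfo → ListContainers triple above is one poll of the runtime; because the request carries an empty filter, crio logs "No filters were applied, returning full container list" and returns every container on the node. A sketch of the same unfiltered list call, reusing the client, imports, and context from the previous sketch (the helper name is illustrative):

	// listAll mirrors the unfiltered ListContainers poll logged above.
	func listAll(ctx context.Context, client runtimeapi.RuntimeServiceClient) error {
		resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{
			Filter: &runtimeapi.ContainerFilter{}, // empty filter => full list
		})
		if err != nil {
			return err
		}
		for _, c := range resp.Containers {
			// e.g. "8ab7d583f028  nginx  CONTAINER_RUNNING"
			fmt.Printf("%.12s  %s  %s\n", c.Id, c.Metadata.Name, c.State)
		}
		return nil
	}

On the node itself, `sudo crictl ps -a` exercises this same ListContainers RPC against the runtime endpoint.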
	Sep 03 22:34:06 addons-389176 crio[825]: time="2025-09-03 22:34:06.607878278Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5e04a105-161d-4e71-9029-5c4b3e94662c name=/runtime.v1.RuntimeService/Version
	Sep 03 22:34:06 addons-389176 crio[825]: time="2025-09-03 22:34:06.608183230Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5e04a105-161d-4e71-9029-5c4b3e94662c name=/runtime.v1.RuntimeService/Version
	Sep 03 22:34:06 addons-389176 crio[825]: time="2025-09-03 22:34:06.609537437Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=15a97346-dc82-41d1-9cc5-fa2099c1d331 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 03 22:34:06 addons-389176 crio[825]: time="2025-09-03 22:34:06.610760936Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1756938846610737625,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596879,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=15a97346-dc82-41d1-9cc5-fa2099c1d331 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 03 22:34:06 addons-389176 crio[825]: time="2025-09-03 22:34:06.611288419Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1be1759f-5778-43a2-864e-0995617a7e83 name=/runtime.v1.RuntimeService/ListContainers
	Sep 03 22:34:06 addons-389176 crio[825]: time="2025-09-03 22:34:06.611362141Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1be1759f-5778-43a2-864e-0995617a7e83 name=/runtime.v1.RuntimeService/ListContainers
	Sep 03 22:34:06 addons-389176 crio[825]: time="2025-09-03 22:34:06.611733035Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8ab7d583f028fe7d63fca111a55647ee10ca5818d25b9563dabdd882d2519b23,PodSandboxId:41650b8a598a080b10e1bf3c667ff658d54a65609dc902d30691e4c949150185,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4a86014ec6994761b7f3118cf47e4b4fd6bac15fc6fa262c4f356386bbc0e9d9,State:CONTAINER_RUNNING,CreatedAt:1756938703683667033,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: abfe0e8a-d948-49f7-a8d4-d4af5a5f1495,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ccc599c49299db3c40284bffdb4bafc78b985f3419fb4528360cf5f5f81f443e,PodSandboxId:6ca3909db405d6e378d372dcf6bbeab4ad6976c200e07a96ece5e6762a69effb,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1756938666727770083,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f3a05009-9dab-4e77-ae8d-565eb5fedd3d,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b95eaef998432b7c63170dcd51fc1747b633b68980057779404962410a31e93e,PodSandboxId:5aa3715db0895390a2c37795b566e1a1f78733a7ef41a2e113e79142b6c9274a,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1f7eaeb01933e719c8a9f4acd8181e555e582330c7d50f24484fb64d2ba9b2ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1bec18b3728e7489d64104958b9da774a7d1c7f0f8b2bae7330480b4891f6f56,State:CONTAINER_RUNNING,CreatedAt:1756938655554549892,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-9cc49f96f-6dv87,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 51d52515-230c-477b-a6d9-ec97c2ba7707,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: d75193f7,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:da3c07761992bbf2aeca1f1fcab3876f66273f8c463fb0b400e2c25f9daa2fca,PodSandboxId:31ee385978088cb985cf841b306878cbcd9b3e0df57668052d973a22f6656fed,Metadata:&ContainerMetadata{Name:patch,Attempt:2,},Image:&ImageSpec{Image:8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65,Sta
te:CONTAINER_EXITED,CreatedAt:1756938587662362102,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-f68v4,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: b81bcbd3-1be1-436a-a211-9ad4215e0e5a,},Annotations:map[string]string{io.kubernetes.container.hash: b2514b62,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f65119b144346ca60865b138d54c86cfb3d140eba6b38e6f3e6cb78160d0fa47,PodSandboxId:f5e42f2b2f06762e8c50f219afb2a657a62b6d1e799ed44d8a39d60c517f1596,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c217da6734db0feee6a8fa1d1697145
49c20bcb8c123ef218aec5d591e3fd65,State:CONTAINER_EXITED,CreatedAt:1756938575811768926,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-bxb2s,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: beba99ad-7ba4-4b34-a1a2-274b441aa7d2,},Annotations:map[string]string{io.kubernetes.container.hash: a3467dfb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8bda341f230430af14af579406d4511c047b5021a55504d1bf90e20a91211445,PodSandboxId:6303dac2d351b6cab897e7bc302102fbb9d76b168a9ff5286bc664e604811f47,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:5768867a6776f266b9c9c6b8b32a069f9346493f7bede50c3dcb28859f36d506,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c08
45d92124611df4baa18fd96892b107dd9c28f6178793e020b10264622c27b,State:CONTAINER_RUNNING,CreatedAt:1756938570289969269,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-bm5l4,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: 2f9eccea-ea75-4a9b-9fd6-ee1c0042454b,},Annotations:map[string]string{io.kubernetes.container.hash: 8e2c4a14,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d3ce9d1e1b486d1a21d68133219153c17d7297ba0d400bfc6a318b90f06df3ea,PodSandboxId:a7b92659dabaa8afedcba2ad62e1db5d70ec9bee21f8395e00164c6338b53428,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c88
0e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1756938556171305908,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bceeeeac-596d-4b1e-8ae1-ca4f3830e59c,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9655328810b78a6f6a280c6895f6a04af2fa1b5c7b31da91b2822e059370b2aa,PodSandboxId:8680b30e788e13afe6aa7a0ee1f43f7dc6b2d3d18bd0681452bb4b7f26c10427,Metadata:&ContainerMetadata{Name:amd-gpu
-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1756938529960457898,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-p5trw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e7710252-13ae-493d-94b9-a0fc2013d283,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ae2499c31ec8a770937ee0bbcc688ee6a29698124d38b100c3edd23913fa155,PodSandboxId:335eea715e85bfede6502e1a46a8f56f32da03b5969548e949b0c4a40a6faf1c,
Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1756938510392046832,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c8e54e9e-c1b3-454f-a505-17fb0f986291,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2eeab5d7870fc47b5b4cbbb9e8f8124c30c84e8b418ce270647a3da86d038d34,PodSandboxId:a4fd80b916375e396fda60101eafe184a21f39f5e495c97af68248792918f163,Metadata:&Co
ntainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1756938504727360772,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-fhjd7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d05fa59-7212-458e-b74b-c9f2803d2a69,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0
,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e88411ff4cc37350199fd30771bec1659a649593a5ae7d5076c3ef260e553c71,PodSandboxId:88bf13c0becde20d600ed97b0e01bea6f977759a488be4a24e44808e14a08185,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_RUNNING,CreatedAt:1756938504201601729,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6lnvs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 058b2a43-7f98-4c22-a7f3-6e6ce78ef135,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminati
onMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:950ca1e7ea313fa25a2f2cdcb4a5ed02bc8cf29ed215f5ab72b3593bde0d9454,PodSandboxId:377645d7b8f3ca85371adede0deb3cafb2abd7a09211f66408e90077413c1030,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1756938492550451133,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-389176,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0cd4777d041de10bdb74570c36f2229c,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}]
,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:080fecfad6b312d583585f69aba7c7a2f031e9b056d294d5c16e1662c09d86de,PodSandboxId:69f2f1ab139f2802033655382be772393e845fb4778debb19d0f770dfa01678f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,State:CONTAINER_RUNNING,CreatedAt:1756938492560745510,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-389176,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88c617302b7c7965f0ccc181f5735f8f,},Annotations:map[string]string{io.kubernetes.container.hash: d671eaa0,io.kubernetes.co
ntainer.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c3ebea097f7bc3ec9333eae29d8f6e1a05468368f83f94be79835b0a26135b2,PodSandboxId:3860cdd62ce75fdb4967acd6fc33ee358e1571ffdcbf059f3f65aa69fd788059,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_RUNNING,CreatedAt:1756938492521702056,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-389176,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 0c4204cd0117452109262f7ad5a08e28,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06fabcded7ce22ce2ca6a78471c77e6fbf3252cf2a397e30f2d99a5fac105aa5,PodSandboxId:1f800d40485280b2b86c5e052bd4def9bbf2af390f10b5ddf8d0d32a2461e04c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_RUNNING,CreatedAt:1756938492517585283,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,
io.kubernetes.pod.name: kube-scheduler-addons-389176,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 339689c5776149b55fff9fc18f6147d8,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1be1759f-5778-43a2-864e-0995617a7e83 name=/runtime.v1.RuntimeService/ListContainers
	Sep 03 22:34:06 addons-389176 crio[825]: time="2025-09-03 22:34:06.646803063Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=fea6e4e3-5880-42ce-9491-158e0c3ab15b name=/runtime.v1.RuntimeService/Version
	Sep 03 22:34:06 addons-389176 crio[825]: time="2025-09-03 22:34:06.646899806Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=fea6e4e3-5880-42ce-9491-158e0c3ab15b name=/runtime.v1.RuntimeService/Version
	Sep 03 22:34:06 addons-389176 crio[825]: time="2025-09-03 22:34:06.648321053Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=753ecd1b-0b39-44a4-83c5-4ce6aa102675 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 03 22:34:06 addons-389176 crio[825]: time="2025-09-03 22:34:06.651874016Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1756938846651802793,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596879,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=753ecd1b-0b39-44a4-83c5-4ce6aa102675 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 03 22:34:06 addons-389176 crio[825]: time="2025-09-03 22:34:06.652680821Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=cb24d76e-5721-46f6-bcde-42d0b0e9f58b name=/runtime.v1.RuntimeService/ListContainers
	Sep 03 22:34:06 addons-389176 crio[825]: time="2025-09-03 22:34:06.652782467Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=cb24d76e-5721-46f6-bcde-42d0b0e9f58b name=/runtime.v1.RuntimeService/ListContainers
	Sep 03 22:34:06 addons-389176 crio[825]: time="2025-09-03 22:34:06.653284313Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8ab7d583f028fe7d63fca111a55647ee10ca5818d25b9563dabdd882d2519b23,PodSandboxId:41650b8a598a080b10e1bf3c667ff658d54a65609dc902d30691e4c949150185,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4a86014ec6994761b7f3118cf47e4b4fd6bac15fc6fa262c4f356386bbc0e9d9,State:CONTAINER_RUNNING,CreatedAt:1756938703683667033,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: abfe0e8a-d948-49f7-a8d4-d4af5a5f1495,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ccc599c49299db3c40284bffdb4bafc78b985f3419fb4528360cf5f5f81f443e,PodSandboxId:6ca3909db405d6e378d372dcf6bbeab4ad6976c200e07a96ece5e6762a69effb,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1756938666727770083,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f3a05009-9dab-4e77-ae8d-565eb5fedd3d,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b95eaef998432b7c63170dcd51fc1747b633b68980057779404962410a31e93e,PodSandboxId:5aa3715db0895390a2c37795b566e1a1f78733a7ef41a2e113e79142b6c9274a,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1f7eaeb01933e719c8a9f4acd8181e555e582330c7d50f24484fb64d2ba9b2ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1bec18b3728e7489d64104958b9da774a7d1c7f0f8b2bae7330480b4891f6f56,State:CONTAINER_RUNNING,CreatedAt:1756938655554549892,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-9cc49f96f-6dv87,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 51d52515-230c-477b-a6d9-ec97c2ba7707,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: d75193f7,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:da3c07761992bbf2aeca1f1fcab3876f66273f8c463fb0b400e2c25f9daa2fca,PodSandboxId:31ee385978088cb985cf841b306878cbcd9b3e0df57668052d973a22f6656fed,Metadata:&ContainerMetadata{Name:patch,Attempt:2,},Image:&ImageSpec{Image:8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65,Sta
te:CONTAINER_EXITED,CreatedAt:1756938587662362102,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-f68v4,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: b81bcbd3-1be1-436a-a211-9ad4215e0e5a,},Annotations:map[string]string{io.kubernetes.container.hash: b2514b62,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f65119b144346ca60865b138d54c86cfb3d140eba6b38e6f3e6cb78160d0fa47,PodSandboxId:f5e42f2b2f06762e8c50f219afb2a657a62b6d1e799ed44d8a39d60c517f1596,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c217da6734db0feee6a8fa1d1697145
49c20bcb8c123ef218aec5d591e3fd65,State:CONTAINER_EXITED,CreatedAt:1756938575811768926,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-bxb2s,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: beba99ad-7ba4-4b34-a1a2-274b441aa7d2,},Annotations:map[string]string{io.kubernetes.container.hash: a3467dfb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8bda341f230430af14af579406d4511c047b5021a55504d1bf90e20a91211445,PodSandboxId:6303dac2d351b6cab897e7bc302102fbb9d76b168a9ff5286bc664e604811f47,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:5768867a6776f266b9c9c6b8b32a069f9346493f7bede50c3dcb28859f36d506,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c08
45d92124611df4baa18fd96892b107dd9c28f6178793e020b10264622c27b,State:CONTAINER_RUNNING,CreatedAt:1756938570289969269,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-bm5l4,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: 2f9eccea-ea75-4a9b-9fd6-ee1c0042454b,},Annotations:map[string]string{io.kubernetes.container.hash: 8e2c4a14,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d3ce9d1e1b486d1a21d68133219153c17d7297ba0d400bfc6a318b90f06df3ea,PodSandboxId:a7b92659dabaa8afedcba2ad62e1db5d70ec9bee21f8395e00164c6338b53428,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c88
0e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1756938556171305908,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bceeeeac-596d-4b1e-8ae1-ca4f3830e59c,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9655328810b78a6f6a280c6895f6a04af2fa1b5c7b31da91b2822e059370b2aa,PodSandboxId:8680b30e788e13afe6aa7a0ee1f43f7dc6b2d3d18bd0681452bb4b7f26c10427,Metadata:&ContainerMetadata{Name:amd-gpu
-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1756938529960457898,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-p5trw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e7710252-13ae-493d-94b9-a0fc2013d283,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ae2499c31ec8a770937ee0bbcc688ee6a29698124d38b100c3edd23913fa155,PodSandboxId:335eea715e85bfede6502e1a46a8f56f32da03b5969548e949b0c4a40a6faf1c,
Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1756938510392046832,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c8e54e9e-c1b3-454f-a505-17fb0f986291,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2eeab5d7870fc47b5b4cbbb9e8f8124c30c84e8b418ce270647a3da86d038d34,PodSandboxId:a4fd80b916375e396fda60101eafe184a21f39f5e495c97af68248792918f163,Metadata:&Co
ntainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1756938504727360772,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-fhjd7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d05fa59-7212-458e-b74b-c9f2803d2a69,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0
,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e88411ff4cc37350199fd30771bec1659a649593a5ae7d5076c3ef260e553c71,PodSandboxId:88bf13c0becde20d600ed97b0e01bea6f977759a488be4a24e44808e14a08185,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_RUNNING,CreatedAt:1756938504201601729,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6lnvs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 058b2a43-7f98-4c22-a7f3-6e6ce78ef135,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminati
onMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:950ca1e7ea313fa25a2f2cdcb4a5ed02bc8cf29ed215f5ab72b3593bde0d9454,PodSandboxId:377645d7b8f3ca85371adede0deb3cafb2abd7a09211f66408e90077413c1030,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1756938492550451133,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-389176,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0cd4777d041de10bdb74570c36f2229c,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}]
,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:080fecfad6b312d583585f69aba7c7a2f031e9b056d294d5c16e1662c09d86de,PodSandboxId:69f2f1ab139f2802033655382be772393e845fb4778debb19d0f770dfa01678f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,State:CONTAINER_RUNNING,CreatedAt:1756938492560745510,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-389176,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88c617302b7c7965f0ccc181f5735f8f,},Annotations:map[string]string{io.kubernetes.container.hash: d671eaa0,io.kubernetes.co
ntainer.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c3ebea097f7bc3ec9333eae29d8f6e1a05468368f83f94be79835b0a26135b2,PodSandboxId:3860cdd62ce75fdb4967acd6fc33ee358e1571ffdcbf059f3f65aa69fd788059,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_RUNNING,CreatedAt:1756938492521702056,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-389176,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 0c4204cd0117452109262f7ad5a08e28,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06fabcded7ce22ce2ca6a78471c77e6fbf3252cf2a397e30f2d99a5fac105aa5,PodSandboxId:1f800d40485280b2b86c5e052bd4def9bbf2af390f10b5ddf8d0d32a2461e04c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_RUNNING,CreatedAt:1756938492517585283,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,
io.kubernetes.pod.name: kube-scheduler-addons-389176,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 339689c5776149b55fff9fc18f6147d8,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=cb24d76e-5721-46f6-bcde-42d0b0e9f58b name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	8ab7d583f028f       docker.io/library/nginx@sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8                              2 minutes ago       Running             nginx                     0                   41650b8a598a0       nginx
	ccc599c49299d       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          3 minutes ago       Running             busybox                   0                   6ca3909db405d       busybox
	b95eaef998432       registry.k8s.io/ingress-nginx/controller@sha256:1f7eaeb01933e719c8a9f4acd8181e555e582330c7d50f24484fb64d2ba9b2ef             3 minutes ago       Running             controller                0                   5aa3715db0895       ingress-nginx-controller-9cc49f96f-6dv87
	da3c07761992b       8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65                                                             4 minutes ago       Exited              patch                     2                   31ee385978088       ingress-nginx-admission-patch-f68v4
	f65119b144346       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24   4 minutes ago       Exited              create                    0                   f5e42f2b2f067       ingress-nginx-admission-create-bxb2s
	8bda341f23043       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:5768867a6776f266b9c9c6b8b32a069f9346493f7bede50c3dcb28859f36d506            4 minutes ago       Running             gadget                    0                   6303dac2d351b       gadget-bm5l4
	d3ce9d1e1b486       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7               4 minutes ago       Running             minikube-ingress-dns      0                   a7b92659dabaa       kube-ingress-dns-minikube
	9655328810b78       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                     5 minutes ago       Running             amd-gpu-device-plugin     0                   8680b30e788e1       amd-gpu-device-plugin-p5trw
	6ae2499c31ec8       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             5 minutes ago       Running             storage-provisioner       0                   335eea715e85b       storage-provisioner
	2eeab5d7870fc       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                             5 minutes ago       Running             coredns                   0                   a4fd80b916375       coredns-66bc5c9577-fhjd7
	e88411ff4cc37       df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce                                                             5 minutes ago       Running             kube-proxy                0                   88bf13c0becde       kube-proxy-6lnvs
	080fecfad6b31       90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90                                                             5 minutes ago       Running             kube-apiserver            0                   69f2f1ab139f2       kube-apiserver-addons-389176
	950ca1e7ea313       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                             5 minutes ago       Running             etcd                      0                   377645d7b8f3c       etcd-addons-389176
	1c3ebea097f7b       a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634                                                             5 minutes ago       Running             kube-controller-manager   0                   3860cdd62ce75       kube-controller-manager-addons-389176
	06fabcded7ce2       46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc                                                             5 minutes ago       Running             kube-scheduler            0                   1f800d4048528       kube-scheduler-addons-389176
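	
	The table above is the node's own CRI view of the workloads. Outside of a failed run, the same snapshot can be taken by asking crictl on the minikube VM directly (a sketch, reusing this run's profile name):
	
	  out/minikube-linux-amd64 -p addons-389176 ssh "sudo crictl ps -a"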
	
	
	==> coredns [2eeab5d7870fc47b5b4cbbb9e8f8124c30c84e8b418ce270647a3da86d038d34] <==
	[INFO] 10.244.0.8:49164 - 25538 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.001326285s
	[INFO] 10.244.0.8:49164 - 38805 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.00031523s
	[INFO] 10.244.0.8:49164 - 46349 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.00014145s
	[INFO] 10.244.0.8:49164 - 56696 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000149188s
	[INFO] 10.244.0.8:49164 - 64077 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000571607s
	[INFO] 10.244.0.8:49164 - 36125 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000100123s
	[INFO] 10.244.0.8:49164 - 58581 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000469312s
	[INFO] 10.244.0.8:40023 - 1575 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000124719s
	[INFO] 10.244.0.8:40023 - 1858 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000198839s
	[INFO] 10.244.0.8:50819 - 19711 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000176654s
	[INFO] 10.244.0.8:50819 - 19454 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00022128s
	[INFO] 10.244.0.8:57801 - 22171 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000117042s
	[INFO] 10.244.0.8:57801 - 21876 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000196196s
	[INFO] 10.244.0.8:47746 - 51010 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000088113s
	[INFO] 10.244.0.8:47746 - 50780 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000121255s
	[INFO] 10.244.0.23:34649 - 45179 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.00067238s
	[INFO] 10.244.0.23:37481 - 2777 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.001046036s
	[INFO] 10.244.0.23:44335 - 54980 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000117925s
	[INFO] 10.244.0.23:39978 - 11847 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.00027058s
	[INFO] 10.244.0.23:59833 - 64882 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000102007s
	[INFO] 10.244.0.23:53446 - 30101 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000091523s
	[INFO] 10.244.0.23:60642 - 24806 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 344 0.004433854s
	[INFO] 10.244.0.23:56069 - 52543 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.002969495s
	[INFO] 10.244.0.27:42175 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000368677s
	[INFO] 10.244.0.27:35571 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000757018s
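	
	The NXDOMAIN/NOERROR pairs above are normal resolver behavior rather than a fault: with the default ndots:5 in pod resolv.conf, a name like registry.kube-system.svc.cluster.local is first expanded through each entry of the pod's search path (kube-system.svc.cluster.local, svc.cluster.local, cluster.local) before the absolute form finally answers NOERROR. The search path can be confirmed from any running pod, e.g. the busybox pod this test created:
	
	  kubectl --context addons-389176 exec busybox -- cat /etc/resolv.conf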
	
	
	==> describe nodes <==
	Name:               addons-389176
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-389176
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b3583632deefb20d71cab8d8ac0a8c3504aed1fb
	                    minikube.k8s.io/name=addons-389176
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_03T22_28_18_0700
	                    minikube.k8s.io/version=v1.36.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-389176
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 03 Sep 2025 22:28:15 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-389176
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 03 Sep 2025 22:34:05 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 03 Sep 2025 22:32:23 +0000   Wed, 03 Sep 2025 22:28:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 03 Sep 2025 22:32:23 +0000   Wed, 03 Sep 2025 22:28:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 03 Sep 2025 22:32:23 +0000   Wed, 03 Sep 2025 22:28:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 03 Sep 2025 22:32:23 +0000   Wed, 03 Sep 2025 22:28:18 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.230
	  Hostname:    addons-389176
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4008588Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4008588Ki
	  pods:               110
	System Info:
	  Machine ID:                 5215b25379bb4b7093fd760d4df3a0d3
	  System UUID:                5215b253-79bb-4b70-93fd-760d4df3a0d3
	  Boot ID:                    0bc349f9-1167-4690-b382-d27eb0b4f334
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m3s
	  default                     hello-world-app-5d498dc89-8r8zg             0 (0%)        0 (0%)      0 (0%)           0 (0%)         1s
	  default                     nginx                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m27s
	  gadget                      gadget-bm5l4                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m36s
	  ingress-nginx               ingress-nginx-controller-9cc49f96f-6dv87    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         5m35s
	  kube-system                 amd-gpu-device-plugin-p5trw                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m40s
	  kube-system                 coredns-66bc5c9577-fhjd7                    100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     5m43s
	  kube-system                 etcd-addons-389176                          100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         5m49s
	  kube-system                 kube-apiserver-addons-389176                250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m49s
	  kube-system                 kube-controller-manager-addons-389176       200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m49s
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m38s
	  kube-system                 kube-proxy-6lnvs                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m44s
	  kube-system                 kube-scheduler-addons-389176                100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m49s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m38s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             260Mi (6%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m41s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  5m55s (x8 over 5m55s)  kubelet          Node addons-389176 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m55s (x8 over 5m55s)  kubelet          Node addons-389176 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m55s (x7 over 5m55s)  kubelet          Node addons-389176 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m55s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 5m49s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  5m49s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  5m49s                  kubelet          Node addons-389176 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m49s                  kubelet          Node addons-389176 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m49s                  kubelet          Node addons-389176 status is now: NodeHasSufficientPID
	  Normal  NodeReady                5m48s                  kubelet          Node addons-389176 status is now: NodeReady
	  Normal  RegisteredNode           5m44s                  node-controller  Node addons-389176 event: Registered Node addons-389176 in Controller
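	
	The percentages under "Allocated resources" are taken against Allocatable: 850m of 2 CPUs is the reported 42%, and 260Mi of 4008588Ki memory is the reported 6%. The same node snapshot can be retaken at any point with:
	
	  kubectl --context addons-389176 describe node addons-389176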
	
	
	==> dmesg <==
	[  +0.000027] kauditd_printk_skb: 353 callbacks suppressed
	[  +0.243348] kauditd_printk_skb: 423 callbacks suppressed
	[ +11.651412] kauditd_printk_skb: 223 callbacks suppressed
	[  +9.195391] kauditd_printk_skb: 20 callbacks suppressed
	[Sep 3 22:29] kauditd_printk_skb: 32 callbacks suppressed
	[ +12.010127] kauditd_printk_skb: 26 callbacks suppressed
	[  +8.067454] kauditd_printk_skb: 26 callbacks suppressed
	[  +6.058827] kauditd_printk_skb: 11 callbacks suppressed
	[  +5.392631] kauditd_printk_skb: 65 callbacks suppressed
	[  +1.010057] kauditd_printk_skb: 158 callbacks suppressed
	[  +6.038540] kauditd_printk_skb: 54 callbacks suppressed
	[Sep 3 22:30] kauditd_printk_skb: 26 callbacks suppressed
	[  +3.849609] kauditd_printk_skb: 65 callbacks suppressed
	[Sep 3 22:31] kauditd_printk_skb: 38 callbacks suppressed
	[ +14.000979] kauditd_printk_skb: 53 callbacks suppressed
	[  +5.899120] kauditd_printk_skb: 22 callbacks suppressed
	[  +5.631330] kauditd_printk_skb: 44 callbacks suppressed
	[  +1.472989] kauditd_printk_skb: 150 callbacks suppressed
	[  +0.727206] kauditd_printk_skb: 207 callbacks suppressed
	[  +0.000027] kauditd_printk_skb: 108 callbacks suppressed
	[  +7.365068] kauditd_printk_skb: 26 callbacks suppressed
	[Sep 3 22:32] kauditd_printk_skb: 37 callbacks suppressed
	[  +7.919430] kauditd_printk_skb: 10 callbacks suppressed
	[  +5.547296] kauditd_printk_skb: 93 callbacks suppressed
	[Sep 3 22:34] kauditd_printk_skb: 127 callbacks suppressed
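	
	The repeated kauditd_printk_skb entries only record that audit messages were rate-limited on the console; they carry no failure signal for this test. The ring buffer can be re-read from the guest with:
	
	  out/minikube-linux-amd64 -p addons-389176 ssh "sudo dmesg | tail -n 50"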
	
	
	==> etcd [950ca1e7ea313fa25a2f2cdcb4a5ed02bc8cf29ed215f5ab72b3593bde0d9454] <==
	{"level":"info","ts":"2025-09-03T22:31:28.689632Z","caller":"traceutil/trace.go:172","msg":"trace[551565502] transaction","detail":"{read_only:false; response_revision:1470; number_of_response:1; }","duration":"162.289382ms","start":"2025-09-03T22:31:28.527096Z","end":"2025-09-03T22:31:28.689386Z","steps":["trace[551565502] 'process raft request'  (duration: 162.023494ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-03T22:31:29.767637Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"135.512717ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/persistentvolumeclaims/default/test-pvc\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-03T22:31:29.768080Z","caller":"traceutil/trace.go:172","msg":"trace[1006987987] range","detail":"{range_begin:/registry/persistentvolumeclaims/default/test-pvc; range_end:; response_count:0; response_revision:1490; }","duration":"135.961965ms","start":"2025-09-03T22:31:29.632102Z","end":"2025-09-03T22:31:29.768064Z","steps":["trace[1006987987] 'range keys from in-memory index tree'  (duration: 135.469208ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-03T22:31:29.768324Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"126.149361ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-03T22:31:29.768348Z","caller":"traceutil/trace.go:172","msg":"trace[1611213928] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1490; }","duration":"126.177857ms","start":"2025-09-03T22:31:29.642164Z","end":"2025-09-03T22:31:29.768342Z","steps":["trace[1611213928] 'range keys from in-memory index tree'  (duration: 126.057266ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-03T22:31:35.436717Z","caller":"traceutil/trace.go:172","msg":"trace[1564073892] transaction","detail":"{read_only:false; response_revision:1538; number_of_response:1; }","duration":"309.398519ms","start":"2025-09-03T22:31:35.127306Z","end":"2025-09-03T22:31:35.436704Z","steps":["trace[1564073892] 'process raft request'  (duration: 309.277928ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-03T22:31:35.436903Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-09-03T22:31:35.127286Z","time spent":"309.506921ms","remote":"127.0.0.1:48360","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4419,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/pods/local-path-storage/helper-pod-create-pvc-427058f3-6272-436c-9cfd-91031a1fcb72\" mod_revision:1536 > success:<request_put:<key:\"/registry/pods/local-path-storage/helper-pod-create-pvc-427058f3-6272-436c-9cfd-91031a1fcb72\" value_size:4319 >> failure:<request_range:<key:\"/registry/pods/local-path-storage/helper-pod-create-pvc-427058f3-6272-436c-9cfd-91031a1fcb72\" > >"}
	{"level":"info","ts":"2025-09-03T22:31:52.327830Z","caller":"traceutil/trace.go:172","msg":"trace[1848273599] linearizableReadLoop","detail":"{readStateIndex:1764; appliedIndex:1764; }","duration":"190.964908ms","start":"2025-09-03T22:31:52.136845Z","end":"2025-09-03T22:31:52.327809Z","steps":["trace[1848273599] 'read index received'  (duration: 190.958442ms)","trace[1848273599] 'applied index is now lower than readState.Index'  (duration: 5.659µs)"],"step_count":2}
	{"level":"info","ts":"2025-09-03T22:31:52.327955Z","caller":"traceutil/trace.go:172","msg":"trace[1886985557] transaction","detail":"{read_only:false; response_revision:1691; number_of_response:1; }","duration":"200.493586ms","start":"2025-09-03T22:31:52.127451Z","end":"2025-09-03T22:31:52.327945Z","steps":["trace[1886985557] 'process raft request'  (duration: 200.389502ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-03T22:31:52.328037Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"191.186391ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/local-path-storage\" limit:1 ","response":"range_response_count:1 size:2050"}
	{"level":"info","ts":"2025-09-03T22:31:52.328067Z","caller":"traceutil/trace.go:172","msg":"trace[1144586948] range","detail":"{range_begin:/registry/namespaces/local-path-storage; range_end:; response_count:1; response_revision:1691; }","duration":"191.229302ms","start":"2025-09-03T22:31:52.136832Z","end":"2025-09-03T22:31:52.328061Z","steps":["trace[1144586948] 'agreement among raft nodes before linearized reading'  (duration: 191.115652ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-03T22:31:52.328281Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"143.649997ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" limit:1 ","response":"range_response_count:1 size:553"}
	{"level":"info","ts":"2025-09-03T22:31:52.328301Z","caller":"traceutil/trace.go:172","msg":"trace[1589493124] range","detail":"{range_begin:/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io; range_end:; response_count:1; response_revision:1691; }","duration":"143.673397ms","start":"2025-09-03T22:31:52.184622Z","end":"2025-09-03T22:31:52.328295Z","steps":["trace[1589493124] 'agreement among raft nodes before linearized reading'  (duration: 143.566183ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-03T22:31:52.328386Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"121.669042ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-03T22:31:52.328398Z","caller":"traceutil/trace.go:172","msg":"trace[1941465920] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1691; }","duration":"121.681741ms","start":"2025-09-03T22:31:52.206713Z","end":"2025-09-03T22:31:52.328394Z","steps":["trace[1941465920] 'agreement among raft nodes before linearized reading'  (duration: 121.659676ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-03T22:31:52.328515Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"124.511092ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/persistentvolumes/pvc-427058f3-6272-436c-9cfd-91031a1fcb72\" limit:1 ","response":"range_response_count:1 size:1262"}
	{"level":"info","ts":"2025-09-03T22:31:52.328663Z","caller":"traceutil/trace.go:172","msg":"trace[1223632144] range","detail":"{range_begin:/registry/persistentvolumes/pvc-427058f3-6272-436c-9cfd-91031a1fcb72; range_end:; response_count:1; response_revision:1691; }","duration":"124.66129ms","start":"2025-09-03T22:31:52.203994Z","end":"2025-09-03T22:31:52.328656Z","steps":["trace[1223632144] 'agreement among raft nodes before linearized reading'  (duration: 124.475504ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-03T22:31:52.461364Z","caller":"traceutil/trace.go:172","msg":"trace[1190592522] linearizableReadLoop","detail":"{readStateIndex:1765; appliedIndex:1765; }","duration":"113.335127ms","start":"2025-09-03T22:31:52.347846Z","end":"2025-09-03T22:31:52.461181Z","steps":["trace[1190592522] 'read index received'  (duration: 113.323412ms)","trace[1190592522] 'applied index is now lower than readState.Index'  (duration: 5.757µs)"],"step_count":2}
	{"level":"warn","ts":"2025-09-03T22:31:52.466855Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"119.105415ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/registry-creds-764b6fb674-22jjq\" limit:1 ","response":"range_response_count:1 size:6048"}
	{"level":"info","ts":"2025-09-03T22:31:52.466900Z","caller":"traceutil/trace.go:172","msg":"trace[1990528699] range","detail":"{range_begin:/registry/pods/kube-system/registry-creds-764b6fb674-22jjq; range_end:; response_count:1; response_revision:1691; }","duration":"119.161384ms","start":"2025-09-03T22:31:52.347727Z","end":"2025-09-03T22:31:52.466889Z","steps":["trace[1990528699] 'agreement among raft nodes before linearized reading'  (duration: 113.819711ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-03T22:31:52.469841Z","caller":"traceutil/trace.go:172","msg":"trace[1339019476] transaction","detail":"{read_only:false; response_revision:1692; number_of_response:1; }","duration":"111.162404ms","start":"2025-09-03T22:31:52.358664Z","end":"2025-09-03T22:31:52.469827Z","steps":["trace[1339019476] 'process raft request'  (duration: 111.083509ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-03T22:31:52.470051Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"118.119714ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-03T22:31:52.470073Z","caller":"traceutil/trace.go:172","msg":"trace[953077894] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1692; }","duration":"118.148154ms","start":"2025-09-03T22:31:52.351919Z","end":"2025-09-03T22:31:52.470067Z","steps":["trace[953077894] 'agreement among raft nodes before linearized reading'  (duration: 118.100056ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-03T22:31:52.866485Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"175.996392ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-03T22:31:52.866541Z","caller":"traceutil/trace.go:172","msg":"trace[433701222] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1696; }","duration":"176.09792ms","start":"2025-09-03T22:31:52.690433Z","end":"2025-09-03T22:31:52.866531Z","steps":["trace[433701222] 'range keys from in-memory index tree'  (duration: 175.962593ms)"],"step_count":1}
	
	
	==> kernel <==
	 22:34:06 up 6 min,  0 users,  load average: 0.59, 0.90, 0.52
	Linux addons-389176 6.6.95 #1 SMP PREEMPT_DYNAMIC Sat Jul 26 03:21:57 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [080fecfad6b312d583585f69aba7c7a2f031e9b056d294d5c16e1662c09d86de] <==
	E0903 22:31:14.445574       1 conn.go:339] Error on socket receive: read tcp 192.168.39.230:8443->192.168.39.1:33542: use of closed network connection
	I0903 22:31:23.569831       1 alloc.go:328] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.111.247.178"}
	I0903 22:31:39.098422       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I0903 22:31:39.309794       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.103.75.84"}
	I0903 22:31:52.422032       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	E0903 22:31:57.444683       1 authentication.go:75] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0903 22:31:59.269834       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0903 22:32:11.776174       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0903 22:32:18.825143       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I0903 22:32:19.182156       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0903 22:32:19.188964       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0903 22:32:19.220859       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0903 22:32:19.221685       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0903 22:32:19.236961       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0903 22:32:19.237522       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0903 22:32:19.271481       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0903 22:32:19.271556       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0903 22:32:19.295899       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0903 22:32:19.295937       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0903 22:32:20.239447       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0903 22:32:20.296964       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0903 22:32:20.314674       1 cacher.go:182] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0903 22:33:17.563917       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0903 22:33:30.207098       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0903 22:34:05.389850       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.111.115.6"}
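	
	The "Terminating all watchers" lines at 22:32:20 appear to coincide with the snapshot.storage.k8s.io CRDs being removed while the volumesnapshots/csi-hostpath addons were torn down mid-run; that is expected during CRD deletion. Whether the CRDs are still registered can be checked with:
	
	  kubectl --context addons-389176 get crd | grep snapshot.storage.k8s.io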
	
	
	==> kube-controller-manager [1c3ebea097f7bc3ec9333eae29d8f6e1a05468368f83f94be79835b0a26135b2] <==
	E0903 22:32:28.153768       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0903 22:32:29.174770       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0903 22:32:29.175786       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0903 22:32:36.625769       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0903 22:32:36.626907       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0903 22:32:37.121424       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0903 22:32:37.122609       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0903 22:32:40.154375       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0903 22:32:40.155540       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0903 22:32:54.195012       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0903 22:32:54.195851       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0903 22:32:54.519756       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0903 22:32:54.520713       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0903 22:32:56.301156       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0903 22:32:56.302097       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0903 22:33:22.069087       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0903 22:33:22.070373       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0903 22:33:24.513344       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0903 22:33:24.514857       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0903 22:33:44.833604       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0903 22:33:44.834587       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0903 22:33:58.186192       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0903 22:33:58.187113       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0903 22:34:02.520976       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0903 22:34:02.522249       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
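	
	The recurring "Failed to watch *v1.PartialObjectMetadata" errors are the controller-manager's metadata informers (used by garbage collection and quota) still retrying the API group that disappeared with the snapshot CRDs above; they are noisy but harmless once the group is gone. What the API server still serves can be listed with:
	
	  kubectl --context addons-389176 api-resources | grep snapshot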
	
	
	==> kube-proxy [e88411ff4cc37350199fd30771bec1659a649593a5ae7d5076c3ef260e553c71] <==
	I0903 22:28:24.700827       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0903 22:28:24.814926       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0903 22:28:24.815822       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.230"]
	E0903 22:28:24.817341       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0903 22:28:25.118123       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I0903 22:28:25.118173       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0903 22:28:25.118257       1 server_linux.go:132] "Using iptables Proxier"
	I0903 22:28:25.138275       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0903 22:28:25.138816       1 server.go:527] "Version info" version="v1.34.0"
	I0903 22:28:25.138829       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0903 22:28:25.147647       1 config.go:200] "Starting service config controller"
	I0903 22:28:25.147660       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0903 22:28:25.147683       1 config.go:106] "Starting endpoint slice config controller"
	I0903 22:28:25.147688       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0903 22:28:25.147698       1 config.go:403] "Starting serviceCIDR config controller"
	I0903 22:28:25.147701       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0903 22:28:25.152242       1 config.go:309] "Starting node config controller"
	I0903 22:28:25.152264       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0903 22:28:25.152270       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0903 22:28:25.249414       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0903 22:28:25.249479       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0903 22:28:25.249551       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
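	
	kube-proxy's fallback to single-stack IPv4 follows from the minikube guest kernel shipping without ip6tables nat support; the probe it performs can be repeated by hand and should fail the same way (exit status 3):
	
	  out/minikube-linux-amd64 -p addons-389176 ssh "sudo ip6tables -t nat -L POSTROUTING"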
	
	
	==> kube-scheduler [06fabcded7ce22ce2ca6a78471c77e6fbf3252cf2a397e30f2d99a5fac105aa5] <==
	E0903 22:28:15.240793       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0903 22:28:15.240847       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0903 22:28:15.240893       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0903 22:28:15.240930       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0903 22:28:15.241036       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0903 22:28:15.241094       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0903 22:28:15.241146       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0903 22:28:15.241645       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0903 22:28:15.242983       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E0903 22:28:15.244056       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0903 22:28:15.244343       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0903 22:28:15.244289       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0903 22:28:15.244263       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0903 22:28:15.245439       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E0903 22:28:16.122978       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0903 22:28:16.199820       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0903 22:28:16.223963       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0903 22:28:16.264911       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E0903 22:28:16.365120       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0903 22:28:16.415535       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0903 22:28:16.415777       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0903 22:28:16.434833       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E0903 22:28:16.447895       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0903 22:28:16.535579       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	I0903 22:28:18.819345       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
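	
	The burst of "is forbidden" errors at 22:28:15-16 is the usual control-plane bring-up race: the scheduler starts its informers before the API server has finished bootstrapping the default RBAC policy, and the final "Caches are synced" line shows it recovered without intervention. The binding it relies on can be inspected afterwards with:
	
	  kubectl --context addons-389176 get clusterrolebinding system:kube-scheduler -o wide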
	
	
	==> kubelet <==
	Sep 03 22:32:22 addons-389176 kubelet[1518]: I0903 22:32:22.404421    1518 scope.go:117] "RemoveContainer" containerID="f3243c11e4fad0b2a208ba92a5493f5402319d4996f9c3067de81270f8f2324d"
	Sep 03 22:32:22 addons-389176 kubelet[1518]: I0903 22:32:22.404944    1518 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f3243c11e4fad0b2a208ba92a5493f5402319d4996f9c3067de81270f8f2324d"} err="failed to get container status \"f3243c11e4fad0b2a208ba92a5493f5402319d4996f9c3067de81270f8f2324d\": rpc error: code = NotFound desc = could not find container \"f3243c11e4fad0b2a208ba92a5493f5402319d4996f9c3067de81270f8f2324d\": container with ID starting with f3243c11e4fad0b2a208ba92a5493f5402319d4996f9c3067de81270f8f2324d not found: ID does not exist"
	Sep 03 22:32:27 addons-389176 kubelet[1518]: E0903 22:32:27.852792    1518 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1756938747852403683  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:596879}  inodes_used:{value:201}}"
	Sep 03 22:32:27 addons-389176 kubelet[1518]: E0903 22:32:27.852861    1518 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1756938747852403683  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:596879}  inodes_used:{value:201}}"
	Sep 03 22:32:37 addons-389176 kubelet[1518]: E0903 22:32:37.858786    1518 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1756938757858477057  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:596879}  inodes_used:{value:201}}"
	Sep 03 22:32:37 addons-389176 kubelet[1518]: E0903 22:32:37.858815    1518 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1756938757858477057  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:596879}  inodes_used:{value:201}}"
	Sep 03 22:32:47 addons-389176 kubelet[1518]: E0903 22:32:47.862636    1518 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1756938767862150241  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:596879}  inodes_used:{value:201}}"
	Sep 03 22:32:47 addons-389176 kubelet[1518]: E0903 22:32:47.862690    1518 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1756938767862150241  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:596879}  inodes_used:{value:201}}"
	Sep 03 22:32:57 addons-389176 kubelet[1518]: E0903 22:32:57.866507    1518 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1756938777866061948  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:596879}  inodes_used:{value:201}}"
	Sep 03 22:32:57 addons-389176 kubelet[1518]: E0903 22:32:57.866546    1518 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1756938777866061948  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:596879}  inodes_used:{value:201}}"
	Sep 03 22:33:07 addons-389176 kubelet[1518]: E0903 22:33:07.869883    1518 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1756938787869386224  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:596879}  inodes_used:{value:201}}"
	Sep 03 22:33:07 addons-389176 kubelet[1518]: E0903 22:33:07.869920    1518 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1756938787869386224  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:596879}  inodes_used:{value:201}}"
	Sep 03 22:33:17 addons-389176 kubelet[1518]: E0903 22:33:17.872644    1518 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1756938797872246658  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:596879}  inodes_used:{value:201}}"
	Sep 03 22:33:17 addons-389176 kubelet[1518]: E0903 22:33:17.872670    1518 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1756938797872246658  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:596879}  inodes_used:{value:201}}"
	Sep 03 22:33:27 addons-389176 kubelet[1518]: E0903 22:33:27.875732    1518 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1756938807875105142  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:596879}  inodes_used:{value:201}}"
	Sep 03 22:33:27 addons-389176 kubelet[1518]: E0903 22:33:27.875827    1518 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1756938807875105142  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:596879}  inodes_used:{value:201}}"
	Sep 03 22:33:28 addons-389176 kubelet[1518]: I0903 22:33:28.641156    1518 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Sep 03 22:33:30 addons-389176 kubelet[1518]: I0903 22:33:30.640601    1518 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-p5trw" secret="" err="secret \"gcp-auth\" not found"
	Sep 03 22:33:37 addons-389176 kubelet[1518]: E0903 22:33:37.877646    1518 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1756938817877372286  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:596879}  inodes_used:{value:201}}"
	Sep 03 22:33:37 addons-389176 kubelet[1518]: E0903 22:33:37.877670    1518 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1756938817877372286  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:596879}  inodes_used:{value:201}}"
	Sep 03 22:33:47 addons-389176 kubelet[1518]: E0903 22:33:47.879656    1518 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1756938827879258476  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:596879}  inodes_used:{value:201}}"
	Sep 03 22:33:47 addons-389176 kubelet[1518]: E0903 22:33:47.879697    1518 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1756938827879258476  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:596879}  inodes_used:{value:201}}"
	Sep 03 22:33:57 addons-389176 kubelet[1518]: E0903 22:33:57.882838    1518 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1756938837882482072  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:596879}  inodes_used:{value:201}}"
	Sep 03 22:33:57 addons-389176 kubelet[1518]: E0903 22:33:57.883266    1518 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1756938837882482072  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:596879}  inodes_used:{value:201}}"
	Sep 03 22:34:05 addons-389176 kubelet[1518]: I0903 22:34:05.380920    1518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t5kzt\" (UniqueName: \"kubernetes.io/projected/c3fd1d39-aaa2-4135-8d12-71e0b871e876-kube-api-access-t5kzt\") pod \"hello-world-app-5d498dc89-8r8zg\" (UID: \"c3fd1d39-aaa2-4135-8d12-71e0b871e876\") " pod="default/hello-world-app-5d498dc89-8r8zg"
	
	
	==> storage-provisioner [6ae2499c31ec8a770937ee0bbcc688ee6a29698124d38b100c3edd23913fa155] <==
	W0903 22:33:42.257388       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0903 22:33:44.260401       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0903 22:33:44.265539       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0903 22:33:46.269642       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0903 22:33:46.275327       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0903 22:33:48.278916       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0903 22:33:48.287531       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0903 22:33:50.290693       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0903 22:33:50.296401       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0903 22:33:52.300509       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0903 22:33:52.307587       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0903 22:33:54.311310       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0903 22:33:54.316327       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0903 22:33:56.319571       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0903 22:33:56.324941       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0903 22:33:58.328594       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0903 22:33:58.334272       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0903 22:34:00.338262       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0903 22:34:00.346450       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0903 22:34:02.350016       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0903 22:34:02.355856       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0903 22:34:04.358872       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0903 22:34:04.366753       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0903 22:34:06.370559       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0903 22:34:06.377104       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
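Note: the storage-provisioner section of the log above is dominated by the client-go warning "v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice", repeated roughly every two seconds, so the provisioner is evidently still reading the core v1 Endpoints API (likely for its leader-election renewals). A minimal client-go sketch of the replacement the warning suggests, listing a Service's EndpointSlices via the kubernetes.io/service-name label; the kubeconfig path, namespace, and service name here are illustrative assumptions, not taken from the provisioner's source:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Illustrative: build a client from the default kubeconfig location.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// EndpointSlices are tied to their Service by the
	// kubernetes.io/service-name label, not by object name.
	slices, err := cs.DiscoveryV1().EndpointSlices("kube-system").List(
		context.TODO(),
		metav1.ListOptions{LabelSelector: "kubernetes.io/service-name=kube-dns"},
	)
	if err != nil {
		panic(err)
	}
	for _, s := range slices.Items {
		for _, ep := range s.Endpoints {
			fmt.Println(s.Name, ep.Addresses)
		}
	}
}

Unlike Endpoints, one Service may own several slices, so consumers must iterate all returned items rather than fetch a single object by the Service's name.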
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-389176 -n addons-389176
helpers_test.go:269: (dbg) Run:  kubectl --context addons-389176 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: hello-world-app-5d498dc89-8r8zg ingress-nginx-admission-create-bxb2s ingress-nginx-admission-patch-f68v4
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-389176 describe pod hello-world-app-5d498dc89-8r8zg ingress-nginx-admission-create-bxb2s ingress-nginx-admission-patch-f68v4
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-389176 describe pod hello-world-app-5d498dc89-8r8zg ingress-nginx-admission-create-bxb2s ingress-nginx-admission-patch-f68v4: exit status 1 (67.997526ms)

-- stdout --
	Name:             hello-world-app-5d498dc89-8r8zg
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-389176/192.168.39.230
	Start Time:       Wed, 03 Sep 2025 22:34:05 +0000
	Labels:           app=hello-world-app
	                  pod-template-hash=5d498dc89
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/hello-world-app-5d498dc89
	Containers:
	  hello-world-app:
	    Container ID:   
	    Image:          docker.io/kicbase/echo-server:1.0
	    Image ID:       
	    Port:           8080/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-t5kzt (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-t5kzt:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  2s    default-scheduler  Successfully assigned default/hello-world-app-5d498dc89-8r8zg to addons-389176
	  Normal  Pulling    2s    kubelet            Pulling image "docker.io/kicbase/echo-server:1.0"

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-bxb2s" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-f68v4" not found

** /stderr **
helpers_test.go:287: kubectl --context addons-389176 describe pod hello-world-app-5d498dc89-8r8zg ingress-nginx-admission-create-bxb2s ingress-nginx-admission-patch-f68v4: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-389176 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-389176 addons disable ingress-dns --alsologtostderr -v=1: (1.254171183s)
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-389176 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-389176 addons disable ingress --alsologtostderr -v=1: (7.720107646s)
--- FAIL: TestAddons/parallel/Ingress (158.02s)
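Note: in the stderr captured earlier, "Process exited with status 28" is almost certainly curl's operation-timed-out exit code surfacing through `minikube ssh`: the request to 127.0.0.1 with the nginx.example.com Host header never completed within curl's deadline, rather than returning a wrong body. A rough Go equivalent of the probe the test performs; the test actually runs curl inside the VM over ssh, which this host-side sketch does not reproduce, and the function name and 10s timeout are made up:

package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

// probeIngress issues the same request as
// `curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'`,
// but with an explicit client timeout instead of curl's default.
func probeIngress(url, hostHeader string, timeout time.Duration) (string, error) {
	client := &http.Client{Timeout: timeout}
	req, err := http.NewRequest(http.MethodGet, url, nil)
	if err != nil {
		return "", err
	}
	req.Host = hostHeader // routes the request to the matching Ingress rule
	resp, err := client.Do(req)
	if err != nil {
		return "", err // a timeout here is what curl reports as exit status 28
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	return string(body), err
}

func main() {
	body, err := probeIngress("http://127.0.0.1/", "nginx.example.com", 10*time.Second)
	if err != nil {
		fmt.Println("probe failed:", err)
		return
	}
	fmt.Println(body)
}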

TestPreload (172.42s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-600653 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4
E0903 23:21:03.161593  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/addons-389176/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-600653 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4: (1m32.253975907s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-600653 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-600653 image pull gcr.io/k8s-minikube/busybox: (3.333205158s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-600653
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-600653: (7.291264s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-600653 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-600653 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio: (1m6.505649912s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-600653 image list
preload_test.go:76: Expected to find gcr.io/k8s-minikube/busybox in image list output, instead got 
-- stdout --
	registry.k8s.io/pause:3.7
	registry.k8s.io/kube-scheduler:v1.24.4
	registry.k8s.io/kube-proxy:v1.24.4
	registry.k8s.io/kube-controller-manager:v1.24.4
	registry.k8s.io/kube-apiserver:v1.24.4
	registry.k8s.io/etcd:3.5.3-0
	registry.k8s.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/kube-scheduler:v1.24.4
	k8s.gcr.io/kube-proxy:v1.24.4
	k8s.gcr.io/kube-controller-manager:v1.24.4
	k8s.gcr.io/kube-apiserver:v1.24.4
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	docker.io/kindest/kindnetd:v20220726-ed811e41

-- /stdout --
panic.go:636: *** TestPreload FAILED at 2025-09-03 23:23:34.665052355 +0000 UTC m=+3401.633026902
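Note: the image list above contains only the preloaded v1.24.4 control-plane images (plus their k8s.gcr.io aliases), so gcr.io/k8s-minikube/busybox, pulled in the step before the stop, did not survive the restart; a plausible reading is that restoring the preload tarball over the crio image store dropped the separately pulled image. A minimal sketch of the assertion preload_test.go makes after the restart, with the binary path and profile name taken from the log above:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Re-run the check from preload_test.go: the image pulled before the
	// stop/start cycle should still be present afterwards.
	out, err := exec.Command("out/minikube-linux-amd64",
		"-p", "test-preload-600653", "image", "list").CombinedOutput()
	if err != nil {
		fmt.Println("image list failed:", err)
		return
	}
	if !strings.Contains(string(out), "gcr.io/k8s-minikube/busybox") {
		fmt.Println("FAIL: busybox missing from image list after restart")
		return
	}
	fmt.Println("ok: busybox survived the restart")
}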
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPreload]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-600653 -n test-preload-600653
helpers_test.go:252: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-600653 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p test-preload-600653 logs -n 25: (1.125749434s)
helpers_test.go:260: TestPreload logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                           ARGS                                                                            │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ multinode-688539 ssh -n multinode-688539-m03 sudo cat /home/docker/cp-test.txt                                                                            │ multinode-688539     │ jenkins │ v1.36.0 │ 03 Sep 25 23:08 UTC │ 03 Sep 25 23:08 UTC │
	│ ssh     │ multinode-688539 ssh -n multinode-688539 sudo cat /home/docker/cp-test_multinode-688539-m03_multinode-688539.txt                                          │ multinode-688539     │ jenkins │ v1.36.0 │ 03 Sep 25 23:08 UTC │ 03 Sep 25 23:08 UTC │
	│ cp      │ multinode-688539 cp multinode-688539-m03:/home/docker/cp-test.txt multinode-688539-m02:/home/docker/cp-test_multinode-688539-m03_multinode-688539-m02.txt │ multinode-688539     │ jenkins │ v1.36.0 │ 03 Sep 25 23:08 UTC │ 03 Sep 25 23:08 UTC │
	│ ssh     │ multinode-688539 ssh -n multinode-688539-m03 sudo cat /home/docker/cp-test.txt                                                                            │ multinode-688539     │ jenkins │ v1.36.0 │ 03 Sep 25 23:08 UTC │ 03 Sep 25 23:08 UTC │
	│ ssh     │ multinode-688539 ssh -n multinode-688539-m02 sudo cat /home/docker/cp-test_multinode-688539-m03_multinode-688539-m02.txt                                  │ multinode-688539     │ jenkins │ v1.36.0 │ 03 Sep 25 23:08 UTC │ 03 Sep 25 23:08 UTC │
	│ node    │ multinode-688539 node stop m03                                                                                                                            │ multinode-688539     │ jenkins │ v1.36.0 │ 03 Sep 25 23:08 UTC │ 03 Sep 25 23:09 UTC │
	│ node    │ multinode-688539 node start m03 -v=5 --alsologtostderr                                                                                                    │ multinode-688539     │ jenkins │ v1.36.0 │ 03 Sep 25 23:09 UTC │ 03 Sep 25 23:09 UTC │
	│ node    │ list -p multinode-688539                                                                                                                                  │ multinode-688539     │ jenkins │ v1.36.0 │ 03 Sep 25 23:09 UTC │                     │
	│ stop    │ -p multinode-688539                                                                                                                                       │ multinode-688539     │ jenkins │ v1.36.0 │ 03 Sep 25 23:09 UTC │ 03 Sep 25 23:12 UTC │
	│ start   │ -p multinode-688539 --wait=true -v=5 --alsologtostderr                                                                                                    │ multinode-688539     │ jenkins │ v1.36.0 │ 03 Sep 25 23:12 UTC │ 03 Sep 25 23:15 UTC │
	│ node    │ list -p multinode-688539                                                                                                                                  │ multinode-688539     │ jenkins │ v1.36.0 │ 03 Sep 25 23:15 UTC │                     │
	│ node    │ multinode-688539 node delete m03                                                                                                                          │ multinode-688539     │ jenkins │ v1.36.0 │ 03 Sep 25 23:15 UTC │ 03 Sep 25 23:15 UTC │
	│ stop    │ multinode-688539 stop                                                                                                                                     │ multinode-688539     │ jenkins │ v1.36.0 │ 03 Sep 25 23:15 UTC │ 03 Sep 25 23:18 UTC │
	│ start   │ -p multinode-688539 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio                                                            │ multinode-688539     │ jenkins │ v1.36.0 │ 03 Sep 25 23:18 UTC │ 03 Sep 25 23:19 UTC │
	│ node    │ list -p multinode-688539                                                                                                                                  │ multinode-688539     │ jenkins │ v1.36.0 │ 03 Sep 25 23:19 UTC │                     │
	│ start   │ -p multinode-688539-m02 --driver=kvm2  --container-runtime=crio                                                                                           │ multinode-688539-m02 │ jenkins │ v1.36.0 │ 03 Sep 25 23:19 UTC │                     │
	│ start   │ -p multinode-688539-m03 --driver=kvm2  --container-runtime=crio                                                                                           │ multinode-688539-m03 │ jenkins │ v1.36.0 │ 03 Sep 25 23:19 UTC │ 03 Sep 25 23:20 UTC │
	│ node    │ add -p multinode-688539                                                                                                                                   │ multinode-688539     │ jenkins │ v1.36.0 │ 03 Sep 25 23:20 UTC │                     │
	│ delete  │ -p multinode-688539-m03                                                                                                                                   │ multinode-688539-m03 │ jenkins │ v1.36.0 │ 03 Sep 25 23:20 UTC │ 03 Sep 25 23:20 UTC │
	│ delete  │ -p multinode-688539                                                                                                                                       │ multinode-688539     │ jenkins │ v1.36.0 │ 03 Sep 25 23:20 UTC │ 03 Sep 25 23:20 UTC │
	│ start   │ -p test-preload-600653 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4   │ test-preload-600653  │ jenkins │ v1.36.0 │ 03 Sep 25 23:20 UTC │ 03 Sep 25 23:22 UTC │
	│ image   │ test-preload-600653 image pull gcr.io/k8s-minikube/busybox                                                                                                │ test-preload-600653  │ jenkins │ v1.36.0 │ 03 Sep 25 23:22 UTC │ 03 Sep 25 23:22 UTC │
	│ stop    │ -p test-preload-600653                                                                                                                                    │ test-preload-600653  │ jenkins │ v1.36.0 │ 03 Sep 25 23:22 UTC │ 03 Sep 25 23:22 UTC │
	│ start   │ -p test-preload-600653 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio                                           │ test-preload-600653  │ jenkins │ v1.36.0 │ 03 Sep 25 23:22 UTC │ 03 Sep 25 23:23 UTC │
	│ image   │ test-preload-600653 image list                                                                                                                            │ test-preload-600653  │ jenkins │ v1.36.0 │ 03 Sep 25 23:23 UTC │ 03 Sep 25 23:23 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/03 23:22:27
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0903 23:22:27.980089  144367 out.go:360] Setting OutFile to fd 1 ...
	I0903 23:22:27.980348  144367 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0903 23:22:27.980360  144367 out.go:374] Setting ErrFile to fd 2...
	I0903 23:22:27.980364  144367 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0903 23:22:27.980605  144367 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21341-109162/.minikube/bin
	I0903 23:22:27.981181  144367 out.go:368] Setting JSON to false
	I0903 23:22:27.982626  144367 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":7492,"bootTime":1756934256,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0903 23:22:27.982698  144367 start.go:140] virtualization: kvm guest
	I0903 23:22:27.984359  144367 out.go:179] * [test-preload-600653] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	I0903 23:22:27.985348  144367 notify.go:220] Checking for updates...
	I0903 23:22:27.985373  144367 out.go:179]   - MINIKUBE_LOCATION=21341
	I0903 23:22:27.986430  144367 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0903 23:22:27.987330  144367 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21341-109162/kubeconfig
	I0903 23:22:27.988094  144367 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21341-109162/.minikube
	I0903 23:22:27.989029  144367 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0903 23:22:27.990155  144367 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0903 23:22:27.991573  144367 config.go:182] Loaded profile config "test-preload-600653": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0903 23:22:27.992216  144367 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21341-109162/.minikube/bin/docker-machine-driver-kvm2
	I0903 23:22:27.992276  144367 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0903 23:22:28.007025  144367 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38251
	I0903 23:22:28.007423  144367 main.go:141] libmachine: () Calling .GetVersion
	I0903 23:22:28.008016  144367 main.go:141] libmachine: Using API Version  1
	I0903 23:22:28.008054  144367 main.go:141] libmachine: () Calling .SetConfigRaw
	I0903 23:22:28.008470  144367 main.go:141] libmachine: () Calling .GetMachineName
	I0903 23:22:28.008662  144367 main.go:141] libmachine: (test-preload-600653) Calling .DriverName
	I0903 23:22:28.010285  144367 out.go:179] * Kubernetes 1.34.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.34.0
	I0903 23:22:28.011348  144367 driver.go:421] Setting default libvirt URI to qemu:///system
	I0903 23:22:28.011637  144367 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21341-109162/.minikube/bin/docker-machine-driver-kvm2
	I0903 23:22:28.011670  144367 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0903 23:22:28.026910  144367 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32853
	I0903 23:22:28.027357  144367 main.go:141] libmachine: () Calling .GetVersion
	I0903 23:22:28.027796  144367 main.go:141] libmachine: Using API Version  1
	I0903 23:22:28.027826  144367 main.go:141] libmachine: () Calling .SetConfigRaw
	I0903 23:22:28.028231  144367 main.go:141] libmachine: () Calling .GetMachineName
	I0903 23:22:28.028431  144367 main.go:141] libmachine: (test-preload-600653) Calling .DriverName
	I0903 23:22:28.063897  144367 out.go:179] * Using the kvm2 driver based on existing profile
	I0903 23:22:28.065039  144367 start.go:304] selected driver: kvm2
	I0903 23:22:28.065056  144367 start.go:918] validating driver "kvm2" against &{Name:test-preload-600653 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21147/minikube-v1.36.0-1753487480-21147-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-600653 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.11 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0903 23:22:28.065161  144367 start.go:929] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0903 23:22:28.065861  144367 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0903 23:22:28.065941  144367 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21341-109162/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0903 23:22:28.081349  144367 install.go:137] /home/jenkins/minikube-integration/21341-109162/.minikube/bin/docker-machine-driver-kvm2 version is 1.36.0
	I0903 23:22:28.081684  144367 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0903 23:22:28.081719  144367 cni.go:84] Creating CNI manager for ""
	I0903 23:22:28.081753  144367 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0903 23:22:28.081797  144367 start.go:348] cluster config:
	{Name:test-preload-600653 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21147/minikube-v1.36.0-1753487480-21147-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-600653 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.11 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0903 23:22:28.081876  144367 iso.go:125] acquiring lock: {Name:mk1032fbced3c9e76ba5a04480289e9f07d0eb9c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0903 23:22:28.083392  144367 out.go:179] * Starting "test-preload-600653" primary control-plane node in "test-preload-600653" cluster
	I0903 23:22:28.084584  144367 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0903 23:22:28.563327  144367 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I0903 23:22:28.563365  144367 cache.go:58] Caching tarball of preloaded images
	I0903 23:22:28.563535  144367 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0903 23:22:28.565037  144367 out.go:179] * Downloading Kubernetes v1.24.4 preload ...
	I0903 23:22:28.566067  144367 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0903 23:22:28.664369  144367 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4?checksum=md5:b2ee0ab83ed99f9e7ff71cb0cf27e8f9 -> /home/jenkins/minikube-integration/21341-109162/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I0903 23:22:39.315136  144367 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0903 23:22:39.315237  144367 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/21341-109162/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0903 23:22:40.282641  144367 cache.go:61] Finished verifying existence of preloaded tar for v1.24.4 on crio
	I0903 23:22:40.282798  144367 profile.go:143] Saving config to /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/test-preload-600653/config.json ...
	I0903 23:22:40.283063  144367 start.go:360] acquireMachinesLock for test-preload-600653: {Name:mkcbe368d68a51a2a3c0eadc653c4df7d9736b4b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0903 23:22:40.283156  144367 start.go:364] duration metric: took 65.517µs to acquireMachinesLock for "test-preload-600653"
	I0903 23:22:40.283178  144367 start.go:96] Skipping create...Using existing machine configuration
	I0903 23:22:40.283186  144367 fix.go:54] fixHost starting: 
	I0903 23:22:40.283479  144367 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21341-109162/.minikube/bin/docker-machine-driver-kvm2
	I0903 23:22:40.283527  144367 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0903 23:22:40.298806  144367 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43675
	I0903 23:22:40.299230  144367 main.go:141] libmachine: () Calling .GetVersion
	I0903 23:22:40.299806  144367 main.go:141] libmachine: Using API Version  1
	I0903 23:22:40.299838  144367 main.go:141] libmachine: () Calling .SetConfigRaw
	I0903 23:22:40.300323  144367 main.go:141] libmachine: () Calling .GetMachineName
	I0903 23:22:40.300509  144367 main.go:141] libmachine: (test-preload-600653) Calling .DriverName
	I0903 23:22:40.300660  144367 main.go:141] libmachine: (test-preload-600653) Calling .GetState
	I0903 23:22:40.302303  144367 fix.go:112] recreateIfNeeded on test-preload-600653: state=Stopped err=<nil>
	I0903 23:22:40.302345  144367 main.go:141] libmachine: (test-preload-600653) Calling .DriverName
	W0903 23:22:40.302515  144367 fix.go:138] unexpected machine state, will restart: <nil>
	I0903 23:22:40.304238  144367 out.go:252] * Restarting existing kvm2 VM for "test-preload-600653" ...
	I0903 23:22:40.304284  144367 main.go:141] libmachine: (test-preload-600653) Calling .Start
	I0903 23:22:40.304429  144367 main.go:141] libmachine: (test-preload-600653) starting domain...
	I0903 23:22:40.304444  144367 main.go:141] libmachine: (test-preload-600653) ensuring networks are active...
	I0903 23:22:40.305124  144367 main.go:141] libmachine: (test-preload-600653) Ensuring network default is active
	I0903 23:22:40.305404  144367 main.go:141] libmachine: (test-preload-600653) Ensuring network mk-test-preload-600653 is active
	I0903 23:22:40.305797  144367 main.go:141] libmachine: (test-preload-600653) getting domain XML...
	I0903 23:22:40.306503  144367 main.go:141] libmachine: (test-preload-600653) creating domain...
	I0903 23:22:41.504732  144367 main.go:141] libmachine: (test-preload-600653) waiting for IP...
	I0903 23:22:41.505546  144367 main.go:141] libmachine: (test-preload-600653) DBG | domain test-preload-600653 has defined MAC address 52:54:00:4c:59:98 in network mk-test-preload-600653
	I0903 23:22:41.505964  144367 main.go:141] libmachine: (test-preload-600653) DBG | unable to find current IP address of domain test-preload-600653 in network mk-test-preload-600653
	I0903 23:22:41.506058  144367 main.go:141] libmachine: (test-preload-600653) DBG | I0903 23:22:41.505959  144436 retry.go:31] will retry after 277.445316ms: waiting for domain to come up
	I0903 23:22:41.785564  144367 main.go:141] libmachine: (test-preload-600653) DBG | domain test-preload-600653 has defined MAC address 52:54:00:4c:59:98 in network mk-test-preload-600653
	I0903 23:22:41.786076  144367 main.go:141] libmachine: (test-preload-600653) DBG | unable to find current IP address of domain test-preload-600653 in network mk-test-preload-600653
	I0903 23:22:41.786101  144367 main.go:141] libmachine: (test-preload-600653) DBG | I0903 23:22:41.786024  144436 retry.go:31] will retry after 301.561262ms: waiting for domain to come up
	I0903 23:22:42.089604  144367 main.go:141] libmachine: (test-preload-600653) DBG | domain test-preload-600653 has defined MAC address 52:54:00:4c:59:98 in network mk-test-preload-600653
	I0903 23:22:42.090078  144367 main.go:141] libmachine: (test-preload-600653) DBG | unable to find current IP address of domain test-preload-600653 in network mk-test-preload-600653
	I0903 23:22:42.090109  144367 main.go:141] libmachine: (test-preload-600653) DBG | I0903 23:22:42.090037  144436 retry.go:31] will retry after 436.783811ms: waiting for domain to come up
	I0903 23:22:42.528567  144367 main.go:141] libmachine: (test-preload-600653) DBG | domain test-preload-600653 has defined MAC address 52:54:00:4c:59:98 in network mk-test-preload-600653
	I0903 23:22:42.528978  144367 main.go:141] libmachine: (test-preload-600653) DBG | unable to find current IP address of domain test-preload-600653 in network mk-test-preload-600653
	I0903 23:22:42.529004  144367 main.go:141] libmachine: (test-preload-600653) DBG | I0903 23:22:42.528937  144436 retry.go:31] will retry after 441.610598ms: waiting for domain to come up
	I0903 23:22:42.972468  144367 main.go:141] libmachine: (test-preload-600653) DBG | domain test-preload-600653 has defined MAC address 52:54:00:4c:59:98 in network mk-test-preload-600653
	I0903 23:22:42.972776  144367 main.go:141] libmachine: (test-preload-600653) DBG | unable to find current IP address of domain test-preload-600653 in network mk-test-preload-600653
	I0903 23:22:42.972807  144367 main.go:141] libmachine: (test-preload-600653) DBG | I0903 23:22:42.972740  144436 retry.go:31] will retry after 694.982462ms: waiting for domain to come up
	I0903 23:22:43.669698  144367 main.go:141] libmachine: (test-preload-600653) DBG | domain test-preload-600653 has defined MAC address 52:54:00:4c:59:98 in network mk-test-preload-600653
	I0903 23:22:43.670135  144367 main.go:141] libmachine: (test-preload-600653) DBG | unable to find current IP address of domain test-preload-600653 in network mk-test-preload-600653
	I0903 23:22:43.670180  144367 main.go:141] libmachine: (test-preload-600653) DBG | I0903 23:22:43.670064  144436 retry.go:31] will retry after 666.673809ms: waiting for domain to come up
	I0903 23:22:44.337804  144367 main.go:141] libmachine: (test-preload-600653) DBG | domain test-preload-600653 has defined MAC address 52:54:00:4c:59:98 in network mk-test-preload-600653
	I0903 23:22:44.338239  144367 main.go:141] libmachine: (test-preload-600653) DBG | unable to find current IP address of domain test-preload-600653 in network mk-test-preload-600653
	I0903 23:22:44.338276  144367 main.go:141] libmachine: (test-preload-600653) DBG | I0903 23:22:44.338185  144436 retry.go:31] will retry after 778.880375ms: waiting for domain to come up
	I0903 23:22:45.118129  144367 main.go:141] libmachine: (test-preload-600653) DBG | domain test-preload-600653 has defined MAC address 52:54:00:4c:59:98 in network mk-test-preload-600653
	I0903 23:22:45.118552  144367 main.go:141] libmachine: (test-preload-600653) DBG | unable to find current IP address of domain test-preload-600653 in network mk-test-preload-600653
	I0903 23:22:45.118585  144367 main.go:141] libmachine: (test-preload-600653) DBG | I0903 23:22:45.118518  144436 retry.go:31] will retry after 1.180960625s: waiting for domain to come up
	I0903 23:22:46.301300  144367 main.go:141] libmachine: (test-preload-600653) DBG | domain test-preload-600653 has defined MAC address 52:54:00:4c:59:98 in network mk-test-preload-600653
	I0903 23:22:46.301699  144367 main.go:141] libmachine: (test-preload-600653) DBG | unable to find current IP address of domain test-preload-600653 in network mk-test-preload-600653
	I0903 23:22:46.301730  144367 main.go:141] libmachine: (test-preload-600653) DBG | I0903 23:22:46.301677  144436 retry.go:31] will retry after 1.807798633s: waiting for domain to come up
	I0903 23:22:48.111707  144367 main.go:141] libmachine: (test-preload-600653) DBG | domain test-preload-600653 has defined MAC address 52:54:00:4c:59:98 in network mk-test-preload-600653
	I0903 23:22:48.112077  144367 main.go:141] libmachine: (test-preload-600653) DBG | unable to find current IP address of domain test-preload-600653 in network mk-test-preload-600653
	I0903 23:22:48.112098  144367 main.go:141] libmachine: (test-preload-600653) DBG | I0903 23:22:48.112056  144436 retry.go:31] will retry after 2.063740945s: waiting for domain to come up
	I0903 23:22:50.178254  144367 main.go:141] libmachine: (test-preload-600653) DBG | domain test-preload-600653 has defined MAC address 52:54:00:4c:59:98 in network mk-test-preload-600653
	I0903 23:22:50.178710  144367 main.go:141] libmachine: (test-preload-600653) DBG | unable to find current IP address of domain test-preload-600653 in network mk-test-preload-600653
	I0903 23:22:50.178734  144367 main.go:141] libmachine: (test-preload-600653) DBG | I0903 23:22:50.178669  144436 retry.go:31] will retry after 1.870697019s: waiting for domain to come up
	I0903 23:22:52.052137  144367 main.go:141] libmachine: (test-preload-600653) DBG | domain test-preload-600653 has defined MAC address 52:54:00:4c:59:98 in network mk-test-preload-600653
	I0903 23:22:52.052450  144367 main.go:141] libmachine: (test-preload-600653) DBG | unable to find current IP address of domain test-preload-600653 in network mk-test-preload-600653
	I0903 23:22:52.052471  144367 main.go:141] libmachine: (test-preload-600653) DBG | I0903 23:22:52.052414  144436 retry.go:31] will retry after 3.177818106s: waiting for domain to come up
	I0903 23:22:55.233707  144367 main.go:141] libmachine: (test-preload-600653) DBG | domain test-preload-600653 has defined MAC address 52:54:00:4c:59:98 in network mk-test-preload-600653
	I0903 23:22:55.234151  144367 main.go:141] libmachine: (test-preload-600653) DBG | unable to find current IP address of domain test-preload-600653 in network mk-test-preload-600653
	I0903 23:22:55.234177  144367 main.go:141] libmachine: (test-preload-600653) DBG | I0903 23:22:55.234123  144436 retry.go:31] will retry after 4.039420988s: waiting for domain to come up
	I0903 23:22:59.277736  144367 main.go:141] libmachine: (test-preload-600653) DBG | domain test-preload-600653 has defined MAC address 52:54:00:4c:59:98 in network mk-test-preload-600653
	I0903 23:22:59.278087  144367 main.go:141] libmachine: (test-preload-600653) found domain IP: 192.168.39.11
	I0903 23:22:59.278111  144367 main.go:141] libmachine: (test-preload-600653) DBG | domain test-preload-600653 has current primary IP address 192.168.39.11 and MAC address 52:54:00:4c:59:98 in network mk-test-preload-600653
	I0903 23:22:59.278117  144367 main.go:141] libmachine: (test-preload-600653) reserving static IP address...
	I0903 23:22:59.278546  144367 main.go:141] libmachine: (test-preload-600653) DBG | found host DHCP lease matching {name: "test-preload-600653", mac: "52:54:00:4c:59:98", ip: "192.168.39.11"} in network mk-test-preload-600653: {Iface:virbr1 ExpiryTime:2025-09-04 00:22:51 +0000 UTC Type:0 Mac:52:54:00:4c:59:98 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:test-preload-600653 Clientid:01:52:54:00:4c:59:98}
	I0903 23:22:59.278578  144367 main.go:141] libmachine: (test-preload-600653) reserved static IP address 192.168.39.11 for domain test-preload-600653
	I0903 23:22:59.278590  144367 main.go:141] libmachine: (test-preload-600653) DBG | skip adding static IP to network mk-test-preload-600653 - found existing host DHCP lease matching {name: "test-preload-600653", mac: "52:54:00:4c:59:98", ip: "192.168.39.11"}
	I0903 23:22:59.278597  144367 main.go:141] libmachine: (test-preload-600653) waiting for SSH...
	I0903 23:22:59.278622  144367 main.go:141] libmachine: (test-preload-600653) DBG | Getting to WaitForSSH function...
	I0903 23:22:59.280933  144367 main.go:141] libmachine: (test-preload-600653) DBG | domain test-preload-600653 has defined MAC address 52:54:00:4c:59:98 in network mk-test-preload-600653
	I0903 23:22:59.281265  144367 main.go:141] libmachine: (test-preload-600653) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:59:98", ip: ""} in network mk-test-preload-600653: {Iface:virbr1 ExpiryTime:2025-09-04 00:22:51 +0000 UTC Type:0 Mac:52:54:00:4c:59:98 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:test-preload-600653 Clientid:01:52:54:00:4c:59:98}
	I0903 23:22:59.281296  144367 main.go:141] libmachine: (test-preload-600653) DBG | domain test-preload-600653 has defined IP address 192.168.39.11 and MAC address 52:54:00:4c:59:98 in network mk-test-preload-600653
	I0903 23:22:59.281440  144367 main.go:141] libmachine: (test-preload-600653) DBG | Using SSH client type: external
	I0903 23:22:59.281466  144367 main.go:141] libmachine: (test-preload-600653) DBG | Using SSH private key: /home/jenkins/minikube-integration/21341-109162/.minikube/machines/test-preload-600653/id_rsa (-rw-------)
	I0903 23:22:59.281517  144367 main.go:141] libmachine: (test-preload-600653) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.11 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/21341-109162/.minikube/machines/test-preload-600653/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0903 23:22:59.281542  144367 main.go:141] libmachine: (test-preload-600653) DBG | About to run SSH command:
	I0903 23:22:59.281555  144367 main.go:141] libmachine: (test-preload-600653) DBG | exit 0
	I0903 23:22:59.409266  144367 main.go:141] libmachine: (test-preload-600653) DBG | SSH cmd err, output: <nil>: 
	I0903 23:22:59.409679  144367 main.go:141] libmachine: (test-preload-600653) Calling .GetConfigRaw
	I0903 23:22:59.410282  144367 main.go:141] libmachine: (test-preload-600653) Calling .GetIP
	I0903 23:22:59.412460  144367 main.go:141] libmachine: (test-preload-600653) DBG | domain test-preload-600653 has defined MAC address 52:54:00:4c:59:98 in network mk-test-preload-600653
	I0903 23:22:59.412797  144367 main.go:141] libmachine: (test-preload-600653) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:59:98", ip: ""} in network mk-test-preload-600653: {Iface:virbr1 ExpiryTime:2025-09-04 00:22:51 +0000 UTC Type:0 Mac:52:54:00:4c:59:98 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:test-preload-600653 Clientid:01:52:54:00:4c:59:98}
	I0903 23:22:59.412835  144367 main.go:141] libmachine: (test-preload-600653) DBG | domain test-preload-600653 has defined IP address 192.168.39.11 and MAC address 52:54:00:4c:59:98 in network mk-test-preload-600653
	I0903 23:22:59.413098  144367 profile.go:143] Saving config to /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/test-preload-600653/config.json ...
	I0903 23:22:59.413292  144367 machine.go:93] provisionDockerMachine start ...
	I0903 23:22:59.413311  144367 main.go:141] libmachine: (test-preload-600653) Calling .DriverName
	I0903 23:22:59.413539  144367 main.go:141] libmachine: (test-preload-600653) Calling .GetSSHHostname
	I0903 23:22:59.415767  144367 main.go:141] libmachine: (test-preload-600653) DBG | domain test-preload-600653 has defined MAC address 52:54:00:4c:59:98 in network mk-test-preload-600653
	I0903 23:22:59.416107  144367 main.go:141] libmachine: (test-preload-600653) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:59:98", ip: ""} in network mk-test-preload-600653: {Iface:virbr1 ExpiryTime:2025-09-04 00:22:51 +0000 UTC Type:0 Mac:52:54:00:4c:59:98 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:test-preload-600653 Clientid:01:52:54:00:4c:59:98}
	I0903 23:22:59.416138  144367 main.go:141] libmachine: (test-preload-600653) DBG | domain test-preload-600653 has defined IP address 192.168.39.11 and MAC address 52:54:00:4c:59:98 in network mk-test-preload-600653
	I0903 23:22:59.416223  144367 main.go:141] libmachine: (test-preload-600653) Calling .GetSSHPort
	I0903 23:22:59.416376  144367 main.go:141] libmachine: (test-preload-600653) Calling .GetSSHKeyPath
	I0903 23:22:59.416494  144367 main.go:141] libmachine: (test-preload-600653) Calling .GetSSHKeyPath
	I0903 23:22:59.416587  144367 main.go:141] libmachine: (test-preload-600653) Calling .GetSSHUsername
	I0903 23:22:59.416702  144367 main.go:141] libmachine: Using SSH client type: native
	I0903 23:22:59.416985  144367 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83a420] 0x83d120 <nil>  [] 0s} 192.168.39.11 22 <nil> <nil>}
	I0903 23:22:59.417002  144367 main.go:141] libmachine: About to run SSH command:
	hostname
	I0903 23:22:59.525417  144367 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0903 23:22:59.525456  144367 main.go:141] libmachine: (test-preload-600653) Calling .GetMachineName
	I0903 23:22:59.525731  144367 buildroot.go:166] provisioning hostname "test-preload-600653"
	I0903 23:22:59.525764  144367 main.go:141] libmachine: (test-preload-600653) Calling .GetMachineName
	I0903 23:22:59.525964  144367 main.go:141] libmachine: (test-preload-600653) Calling .GetSSHHostname
	I0903 23:22:59.528626  144367 main.go:141] libmachine: (test-preload-600653) DBG | domain test-preload-600653 has defined MAC address 52:54:00:4c:59:98 in network mk-test-preload-600653
	I0903 23:22:59.528967  144367 main.go:141] libmachine: (test-preload-600653) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:59:98", ip: ""} in network mk-test-preload-600653: {Iface:virbr1 ExpiryTime:2025-09-04 00:22:51 +0000 UTC Type:0 Mac:52:54:00:4c:59:98 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:test-preload-600653 Clientid:01:52:54:00:4c:59:98}
	I0903 23:22:59.528999  144367 main.go:141] libmachine: (test-preload-600653) DBG | domain test-preload-600653 has defined IP address 192.168.39.11 and MAC address 52:54:00:4c:59:98 in network mk-test-preload-600653
	I0903 23:22:59.529097  144367 main.go:141] libmachine: (test-preload-600653) Calling .GetSSHPort
	I0903 23:22:59.529283  144367 main.go:141] libmachine: (test-preload-600653) Calling .GetSSHKeyPath
	I0903 23:22:59.529490  144367 main.go:141] libmachine: (test-preload-600653) Calling .GetSSHKeyPath
	I0903 23:22:59.529640  144367 main.go:141] libmachine: (test-preload-600653) Calling .GetSSHUsername
	I0903 23:22:59.529789  144367 main.go:141] libmachine: Using SSH client type: native
	I0903 23:22:59.529980  144367 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83a420] 0x83d120 <nil>  [] 0s} 192.168.39.11 22 <nil> <nil>}
	I0903 23:22:59.529993  144367 main.go:141] libmachine: About to run SSH command:
	sudo hostname test-preload-600653 && echo "test-preload-600653" | sudo tee /etc/hostname
	I0903 23:22:59.653771  144367 main.go:141] libmachine: SSH cmd err, output: <nil>: test-preload-600653
	
	I0903 23:22:59.653808  144367 main.go:141] libmachine: (test-preload-600653) Calling .GetSSHHostname
	I0903 23:22:59.656583  144367 main.go:141] libmachine: (test-preload-600653) DBG | domain test-preload-600653 has defined MAC address 52:54:00:4c:59:98 in network mk-test-preload-600653
	I0903 23:22:59.656927  144367 main.go:141] libmachine: (test-preload-600653) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:59:98", ip: ""} in network mk-test-preload-600653: {Iface:virbr1 ExpiryTime:2025-09-04 00:22:51 +0000 UTC Type:0 Mac:52:54:00:4c:59:98 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:test-preload-600653 Clientid:01:52:54:00:4c:59:98}
	I0903 23:22:59.656962  144367 main.go:141] libmachine: (test-preload-600653) DBG | domain test-preload-600653 has defined IP address 192.168.39.11 and MAC address 52:54:00:4c:59:98 in network mk-test-preload-600653
	I0903 23:22:59.657091  144367 main.go:141] libmachine: (test-preload-600653) Calling .GetSSHPort
	I0903 23:22:59.657273  144367 main.go:141] libmachine: (test-preload-600653) Calling .GetSSHKeyPath
	I0903 23:22:59.657472  144367 main.go:141] libmachine: (test-preload-600653) Calling .GetSSHKeyPath
	I0903 23:22:59.657635  144367 main.go:141] libmachine: (test-preload-600653) Calling .GetSSHUsername
	I0903 23:22:59.657807  144367 main.go:141] libmachine: Using SSH client type: native
	I0903 23:22:59.658008  144367 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83a420] 0x83d120 <nil>  [] 0s} 192.168.39.11 22 <nil> <nil>}
	I0903 23:22:59.658023  144367 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-600653' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-600653/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-600653' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0903 23:22:59.775667  144367 main.go:141] libmachine: SSH cmd err, output: <nil>: 
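The hostname provisioning above is a two-step idempotent update: set the kernel hostname over SSH, then point 127.0.1.1 at the machine name in /etc/hosts only when no matching entry already exists. Below is a minimal Go sketch of the /etc/hosts transform; ensureHostname is an assumed helper name for illustration, not the minikube source.

package main

import (
	"fmt"
	"os"
	"regexp"
)

// ensureHostname reproduces the grep/sed/tee logic from the logged SSH
// command as a pure string transform.
func ensureHostname(hosts, name string) string {
	if regexp.MustCompile(`(?m)^.*\s` + regexp.QuoteMeta(name) + `$`).MatchString(hosts) {
		return hosts // an entry for this name already exists
	}
	loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
	if loopback.MatchString(hosts) {
		return loopback.ReplaceAllString(hosts, "127.0.1.1 "+name)
	}
	return hosts + "127.0.1.1 " + name + "\n"
}

func main() {
	b, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	fmt.Print(ensureHostname(string(b), "test-preload-600653"))
}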
	I0903 23:22:59.775701  144367 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21341-109162/.minikube CaCertPath:/home/jenkins/minikube-integration/21341-109162/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21341-109162/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21341-109162/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21341-109162/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21341-109162/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21341-109162/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21341-109162/.minikube}
	I0903 23:22:59.775720  144367 buildroot.go:174] setting up certificates
	I0903 23:22:59.775729  144367 provision.go:84] configureAuth start
	I0903 23:22:59.775739  144367 main.go:141] libmachine: (test-preload-600653) Calling .GetMachineName
	I0903 23:22:59.776051  144367 main.go:141] libmachine: (test-preload-600653) Calling .GetIP
	I0903 23:22:59.778759  144367 main.go:141] libmachine: (test-preload-600653) DBG | domain test-preload-600653 has defined MAC address 52:54:00:4c:59:98 in network mk-test-preload-600653
	I0903 23:22:59.779087  144367 main.go:141] libmachine: (test-preload-600653) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:59:98", ip: ""} in network mk-test-preload-600653: {Iface:virbr1 ExpiryTime:2025-09-04 00:22:51 +0000 UTC Type:0 Mac:52:54:00:4c:59:98 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:test-preload-600653 Clientid:01:52:54:00:4c:59:98}
	I0903 23:22:59.779119  144367 main.go:141] libmachine: (test-preload-600653) DBG | domain test-preload-600653 has defined IP address 192.168.39.11 and MAC address 52:54:00:4c:59:98 in network mk-test-preload-600653
	I0903 23:22:59.779248  144367 main.go:141] libmachine: (test-preload-600653) Calling .GetSSHHostname
	I0903 23:22:59.781053  144367 main.go:141] libmachine: (test-preload-600653) DBG | domain test-preload-600653 has defined MAC address 52:54:00:4c:59:98 in network mk-test-preload-600653
	I0903 23:22:59.781328  144367 main.go:141] libmachine: (test-preload-600653) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:59:98", ip: ""} in network mk-test-preload-600653: {Iface:virbr1 ExpiryTime:2025-09-04 00:22:51 +0000 UTC Type:0 Mac:52:54:00:4c:59:98 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:test-preload-600653 Clientid:01:52:54:00:4c:59:98}
	I0903 23:22:59.781361  144367 main.go:141] libmachine: (test-preload-600653) DBG | domain test-preload-600653 has defined IP address 192.168.39.11 and MAC address 52:54:00:4c:59:98 in network mk-test-preload-600653
	I0903 23:22:59.781539  144367 provision.go:143] copyHostCerts
	I0903 23:22:59.781598  144367 exec_runner.go:144] found /home/jenkins/minikube-integration/21341-109162/.minikube/ca.pem, removing ...
	I0903 23:22:59.781622  144367 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21341-109162/.minikube/ca.pem
	I0903 23:22:59.781726  144367 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21341-109162/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21341-109162/.minikube/ca.pem (1078 bytes)
	I0903 23:22:59.781835  144367 exec_runner.go:144] found /home/jenkins/minikube-integration/21341-109162/.minikube/cert.pem, removing ...
	I0903 23:22:59.781846  144367 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21341-109162/.minikube/cert.pem
	I0903 23:22:59.781886  144367 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21341-109162/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21341-109162/.minikube/cert.pem (1123 bytes)
	I0903 23:22:59.781958  144367 exec_runner.go:144] found /home/jenkins/minikube-integration/21341-109162/.minikube/key.pem, removing ...
	I0903 23:22:59.781968  144367 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21341-109162/.minikube/key.pem
	I0903 23:22:59.782001  144367 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21341-109162/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21341-109162/.minikube/key.pem (1675 bytes)
	I0903 23:22:59.782065  144367 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21341-109162/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21341-109162/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21341-109162/.minikube/certs/ca-key.pem org=jenkins.test-preload-600653 san=[127.0.0.1 192.168.39.11 localhost minikube test-preload-600653]
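The server cert generated here carries the SANs listed in the log line (127.0.0.1, 192.168.39.11, localhost, minikube, test-preload-600653). A runnable sketch of issuing such a CA-signed server certificate with Go's crypto/x509 follows; the throwaway in-process CA stands in for the ca.pem/ca-key.pem under .minikube/certs, and error handling is trimmed.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA (assumption: stands in for the cached minikubeCA).
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server cert with the SANs from the logged provision step.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.test-preload-600653"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.11")},
		DNSNames:     []string{"localhost", "minikube", "test-preload-600653"},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}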
	I0903 23:23:00.118087  144367 provision.go:177] copyRemoteCerts
	I0903 23:23:00.118142  144367 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0903 23:23:00.118166  144367 main.go:141] libmachine: (test-preload-600653) Calling .GetSSHHostname
	I0903 23:23:00.120826  144367 main.go:141] libmachine: (test-preload-600653) DBG | domain test-preload-600653 has defined MAC address 52:54:00:4c:59:98 in network mk-test-preload-600653
	I0903 23:23:00.121092  144367 main.go:141] libmachine: (test-preload-600653) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:59:98", ip: ""} in network mk-test-preload-600653: {Iface:virbr1 ExpiryTime:2025-09-04 00:22:51 +0000 UTC Type:0 Mac:52:54:00:4c:59:98 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:test-preload-600653 Clientid:01:52:54:00:4c:59:98}
	I0903 23:23:00.121124  144367 main.go:141] libmachine: (test-preload-600653) DBG | domain test-preload-600653 has defined IP address 192.168.39.11 and MAC address 52:54:00:4c:59:98 in network mk-test-preload-600653
	I0903 23:23:00.121244  144367 main.go:141] libmachine: (test-preload-600653) Calling .GetSSHPort
	I0903 23:23:00.121468  144367 main.go:141] libmachine: (test-preload-600653) Calling .GetSSHKeyPath
	I0903 23:23:00.121621  144367 main.go:141] libmachine: (test-preload-600653) Calling .GetSSHUsername
	I0903 23:23:00.121746  144367 sshutil.go:53] new ssh client: &{IP:192.168.39.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21341-109162/.minikube/machines/test-preload-600653/id_rsa Username:docker}
	I0903 23:23:00.214935  144367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0903 23:23:00.244543  144367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0903 23:23:00.273780  144367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0903 23:23:00.301990  144367 provision.go:87] duration metric: took 526.246601ms to configureAuth
	I0903 23:23:00.302022  144367 buildroot.go:189] setting minikube options for container-runtime
	I0903 23:23:00.302189  144367 config.go:182] Loaded profile config "test-preload-600653": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0903 23:23:00.302253  144367 main.go:141] libmachine: (test-preload-600653) Calling .GetSSHHostname
	I0903 23:23:00.304770  144367 main.go:141] libmachine: (test-preload-600653) DBG | domain test-preload-600653 has defined MAC address 52:54:00:4c:59:98 in network mk-test-preload-600653
	I0903 23:23:00.305029  144367 main.go:141] libmachine: (test-preload-600653) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:59:98", ip: ""} in network mk-test-preload-600653: {Iface:virbr1 ExpiryTime:2025-09-04 00:22:51 +0000 UTC Type:0 Mac:52:54:00:4c:59:98 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:test-preload-600653 Clientid:01:52:54:00:4c:59:98}
	I0903 23:23:00.305056  144367 main.go:141] libmachine: (test-preload-600653) DBG | domain test-preload-600653 has defined IP address 192.168.39.11 and MAC address 52:54:00:4c:59:98 in network mk-test-preload-600653
	I0903 23:23:00.305195  144367 main.go:141] libmachine: (test-preload-600653) Calling .GetSSHPort
	I0903 23:23:00.305461  144367 main.go:141] libmachine: (test-preload-600653) Calling .GetSSHKeyPath
	I0903 23:23:00.305628  144367 main.go:141] libmachine: (test-preload-600653) Calling .GetSSHKeyPath
	I0903 23:23:00.305757  144367 main.go:141] libmachine: (test-preload-600653) Calling .GetSSHUsername
	I0903 23:23:00.305892  144367 main.go:141] libmachine: Using SSH client type: native
	I0903 23:23:00.306091  144367 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83a420] 0x83d120 <nil>  [] 0s} 192.168.39.11 22 <nil> <nil>}
	I0903 23:23:00.306106  144367 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0903 23:23:00.535378  144367 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0903 23:23:00.535408  144367 machine.go:96] duration metric: took 1.122102482s to provisionDockerMachine
	I0903 23:23:00.535421  144367 start.go:293] postStartSetup for "test-preload-600653" (driver="kvm2")
	I0903 23:23:00.535430  144367 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0903 23:23:00.535447  144367 main.go:141] libmachine: (test-preload-600653) Calling .DriverName
	I0903 23:23:00.535794  144367 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0903 23:23:00.535834  144367 main.go:141] libmachine: (test-preload-600653) Calling .GetSSHHostname
	I0903 23:23:00.538255  144367 main.go:141] libmachine: (test-preload-600653) DBG | domain test-preload-600653 has defined MAC address 52:54:00:4c:59:98 in network mk-test-preload-600653
	I0903 23:23:00.538594  144367 main.go:141] libmachine: (test-preload-600653) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:59:98", ip: ""} in network mk-test-preload-600653: {Iface:virbr1 ExpiryTime:2025-09-04 00:22:51 +0000 UTC Type:0 Mac:52:54:00:4c:59:98 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:test-preload-600653 Clientid:01:52:54:00:4c:59:98}
	I0903 23:23:00.538617  144367 main.go:141] libmachine: (test-preload-600653) DBG | domain test-preload-600653 has defined IP address 192.168.39.11 and MAC address 52:54:00:4c:59:98 in network mk-test-preload-600653
	I0903 23:23:00.538808  144367 main.go:141] libmachine: (test-preload-600653) Calling .GetSSHPort
	I0903 23:23:00.538983  144367 main.go:141] libmachine: (test-preload-600653) Calling .GetSSHKeyPath
	I0903 23:23:00.539110  144367 main.go:141] libmachine: (test-preload-600653) Calling .GetSSHUsername
	I0903 23:23:00.539248  144367 sshutil.go:53] new ssh client: &{IP:192.168.39.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21341-109162/.minikube/machines/test-preload-600653/id_rsa Username:docker}
	I0903 23:23:00.625741  144367 ssh_runner.go:195] Run: cat /etc/os-release
	I0903 23:23:00.629982  144367 info.go:137] Remote host: Buildroot 2025.02
	I0903 23:23:00.630010  144367 filesync.go:126] Scanning /home/jenkins/minikube-integration/21341-109162/.minikube/addons for local assets ...
	I0903 23:23:00.630084  144367 filesync.go:126] Scanning /home/jenkins/minikube-integration/21341-109162/.minikube/files for local assets ...
	I0903 23:23:00.630155  144367 filesync.go:149] local asset: /home/jenkins/minikube-integration/21341-109162/.minikube/files/etc/ssl/certs/1132882.pem -> 1132882.pem in /etc/ssl/certs
	I0903 23:23:00.630239  144367 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0903 23:23:00.640736  144367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/files/etc/ssl/certs/1132882.pem --> /etc/ssl/certs/1132882.pem (1708 bytes)
	I0903 23:23:00.666799  144367 start.go:296] duration metric: took 131.362965ms for postStartSetup
	I0903 23:23:00.666848  144367 fix.go:56] duration metric: took 20.383659261s for fixHost
	I0903 23:23:00.666870  144367 main.go:141] libmachine: (test-preload-600653) Calling .GetSSHHostname
	I0903 23:23:00.669346  144367 main.go:141] libmachine: (test-preload-600653) DBG | domain test-preload-600653 has defined MAC address 52:54:00:4c:59:98 in network mk-test-preload-600653
	I0903 23:23:00.669645  144367 main.go:141] libmachine: (test-preload-600653) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:59:98", ip: ""} in network mk-test-preload-600653: {Iface:virbr1 ExpiryTime:2025-09-04 00:22:51 +0000 UTC Type:0 Mac:52:54:00:4c:59:98 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:test-preload-600653 Clientid:01:52:54:00:4c:59:98}
	I0903 23:23:00.669674  144367 main.go:141] libmachine: (test-preload-600653) DBG | domain test-preload-600653 has defined IP address 192.168.39.11 and MAC address 52:54:00:4c:59:98 in network mk-test-preload-600653
	I0903 23:23:00.669828  144367 main.go:141] libmachine: (test-preload-600653) Calling .GetSSHPort
	I0903 23:23:00.670030  144367 main.go:141] libmachine: (test-preload-600653) Calling .GetSSHKeyPath
	I0903 23:23:00.670208  144367 main.go:141] libmachine: (test-preload-600653) Calling .GetSSHKeyPath
	I0903 23:23:00.670335  144367 main.go:141] libmachine: (test-preload-600653) Calling .GetSSHUsername
	I0903 23:23:00.670493  144367 main.go:141] libmachine: Using SSH client type: native
	I0903 23:23:00.670697  144367 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83a420] 0x83d120 <nil>  [] 0s} 192.168.39.11 22 <nil> <nil>}
	I0903 23:23:00.670707  144367 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0903 23:23:00.782189  144367 main.go:141] libmachine: SSH cmd err, output: <nil>: 1756941780.758152459
	
	I0903 23:23:00.782216  144367 fix.go:216] guest clock: 1756941780.758152459
	I0903 23:23:00.782223  144367 fix.go:229] Guest: 2025-09-03 23:23:00.758152459 +0000 UTC Remote: 2025-09-03 23:23:00.666852067 +0000 UTC m=+32.722423700 (delta=91.300392ms)
	I0903 23:23:00.782245  144367 fix.go:200] guest clock delta is within tolerance: 91.300392ms
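The tolerance check above compares the guest's `date +%s.%N` output against the host-side timestamp. A sketch of that comparison follows, using the two values from the log lines; the one-second tolerance is an assumption, not necessarily minikube's threshold.

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

func main() {
	// Guest clock as returned by `date +%s.%N`, and the host-side "Remote"
	// timestamp, both taken from the log lines above.
	guestOut := "1756941780.758152459"
	host := time.Date(2025, 9, 3, 23, 23, 0, 666852067, time.UTC)

	parts := strings.SplitN(strings.TrimSpace(guestOut), ".", 2)
	sec, _ := strconv.ParseInt(parts[0], 10, 64)
	nsec, _ := strconv.ParseInt(parts[1], 10, 64)
	guest := time.Unix(sec, nsec).UTC()

	const tolerance = time.Second // assumed threshold
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	if delta <= tolerance {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta) // ~91.3ms here
	} else {
		fmt.Println("would resync the guest clock")
	}
}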
	I0903 23:23:00.782251  144367 start.go:83] releasing machines lock for "test-preload-600653", held for 20.499083066s
	I0903 23:23:00.782273  144367 main.go:141] libmachine: (test-preload-600653) Calling .DriverName
	I0903 23:23:00.782522  144367 main.go:141] libmachine: (test-preload-600653) Calling .GetIP
	I0903 23:23:00.785139  144367 main.go:141] libmachine: (test-preload-600653) DBG | domain test-preload-600653 has defined MAC address 52:54:00:4c:59:98 in network mk-test-preload-600653
	I0903 23:23:00.785463  144367 main.go:141] libmachine: (test-preload-600653) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:59:98", ip: ""} in network mk-test-preload-600653: {Iface:virbr1 ExpiryTime:2025-09-04 00:22:51 +0000 UTC Type:0 Mac:52:54:00:4c:59:98 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:test-preload-600653 Clientid:01:52:54:00:4c:59:98}
	I0903 23:23:00.785493  144367 main.go:141] libmachine: (test-preload-600653) DBG | domain test-preload-600653 has defined IP address 192.168.39.11 and MAC address 52:54:00:4c:59:98 in network mk-test-preload-600653
	I0903 23:23:00.785602  144367 main.go:141] libmachine: (test-preload-600653) Calling .DriverName
	I0903 23:23:00.786052  144367 main.go:141] libmachine: (test-preload-600653) Calling .DriverName
	I0903 23:23:00.786238  144367 main.go:141] libmachine: (test-preload-600653) Calling .DriverName
	I0903 23:23:00.786329  144367 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0903 23:23:00.786372  144367 main.go:141] libmachine: (test-preload-600653) Calling .GetSSHHostname
	I0903 23:23:00.786446  144367 ssh_runner.go:195] Run: cat /version.json
	I0903 23:23:00.786476  144367 main.go:141] libmachine: (test-preload-600653) Calling .GetSSHHostname
	I0903 23:23:00.788916  144367 main.go:141] libmachine: (test-preload-600653) DBG | domain test-preload-600653 has defined MAC address 52:54:00:4c:59:98 in network mk-test-preload-600653
	I0903 23:23:00.789118  144367 main.go:141] libmachine: (test-preload-600653) DBG | domain test-preload-600653 has defined MAC address 52:54:00:4c:59:98 in network mk-test-preload-600653
	I0903 23:23:00.789323  144367 main.go:141] libmachine: (test-preload-600653) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:59:98", ip: ""} in network mk-test-preload-600653: {Iface:virbr1 ExpiryTime:2025-09-04 00:22:51 +0000 UTC Type:0 Mac:52:54:00:4c:59:98 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:test-preload-600653 Clientid:01:52:54:00:4c:59:98}
	I0903 23:23:00.789344  144367 main.go:141] libmachine: (test-preload-600653) DBG | domain test-preload-600653 has defined IP address 192.168.39.11 and MAC address 52:54:00:4c:59:98 in network mk-test-preload-600653
	I0903 23:23:00.789475  144367 main.go:141] libmachine: (test-preload-600653) Calling .GetSSHPort
	I0903 23:23:00.789581  144367 main.go:141] libmachine: (test-preload-600653) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:59:98", ip: ""} in network mk-test-preload-600653: {Iface:virbr1 ExpiryTime:2025-09-04 00:22:51 +0000 UTC Type:0 Mac:52:54:00:4c:59:98 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:test-preload-600653 Clientid:01:52:54:00:4c:59:98}
	I0903 23:23:00.789610  144367 main.go:141] libmachine: (test-preload-600653) DBG | domain test-preload-600653 has defined IP address 192.168.39.11 and MAC address 52:54:00:4c:59:98 in network mk-test-preload-600653
	I0903 23:23:00.789647  144367 main.go:141] libmachine: (test-preload-600653) Calling .GetSSHKeyPath
	I0903 23:23:00.789844  144367 main.go:141] libmachine: (test-preload-600653) Calling .GetSSHUsername
	I0903 23:23:00.789853  144367 main.go:141] libmachine: (test-preload-600653) Calling .GetSSHPort
	I0903 23:23:00.790040  144367 main.go:141] libmachine: (test-preload-600653) Calling .GetSSHKeyPath
	I0903 23:23:00.790043  144367 sshutil.go:53] new ssh client: &{IP:192.168.39.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21341-109162/.minikube/machines/test-preload-600653/id_rsa Username:docker}
	I0903 23:23:00.790150  144367 main.go:141] libmachine: (test-preload-600653) Calling .GetSSHUsername
	I0903 23:23:00.790293  144367 sshutil.go:53] new ssh client: &{IP:192.168.39.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21341-109162/.minikube/machines/test-preload-600653/id_rsa Username:docker}
	I0903 23:23:00.902113  144367 ssh_runner.go:195] Run: systemctl --version
	I0903 23:23:00.907925  144367 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0903 23:23:01.051812  144367 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0903 23:23:01.058016  144367 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0903 23:23:01.058090  144367 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0903 23:23:01.077126  144367 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0903 23:23:01.077153  144367 start.go:495] detecting cgroup driver to use...
	I0903 23:23:01.077213  144367 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0903 23:23:01.096139  144367 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0903 23:23:01.111827  144367 docker.go:218] disabling cri-docker service (if available) ...
	I0903 23:23:01.111883  144367 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0903 23:23:01.126714  144367 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0903 23:23:01.142026  144367 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0903 23:23:01.276924  144367 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0903 23:23:01.422060  144367 docker.go:234] disabling docker service ...
	I0903 23:23:01.422159  144367 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0903 23:23:01.438584  144367 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0903 23:23:01.452823  144367 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0903 23:23:01.666040  144367 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0903 23:23:01.800149  144367 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0903 23:23:01.814459  144367 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0903 23:23:01.834749  144367 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.7" pause image...
	I0903 23:23:01.834815  144367 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.7"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0903 23:23:01.845675  144367 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0903 23:23:01.845745  144367 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0903 23:23:01.857138  144367 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0903 23:23:01.868317  144367 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0903 23:23:01.879324  144367 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0903 23:23:01.890741  144367 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0903 23:23:01.901837  144367 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0903 23:23:01.920115  144367 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
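Each sed invocation above is an idempotent rewrite of a single key in /etc/crio/crio.conf.d/02-crio.conf. The same transform is shown below as a pure Go function for the pause_image and cgroup_manager keys; this is illustrative only, with a stand-in for the file contents.

package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Stand-in for the contents of /etc/crio/crio.conf.d/02-crio.conf.
	conf := "pause_image = \"registry.k8s.io/pause:3.9\"\ncgroup_manager = \"systemd\"\n"
	// Equivalent of: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.7"|'
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.7"`)
	// Equivalent of: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	fmt.Print(conf)
}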
	I0903 23:23:01.930992  144367 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0903 23:23:01.940019  144367 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0903 23:23:01.940074  144367 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0903 23:23:01.957116  144367 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
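The fallback pattern above is: probe the bridge-netfilter sysctl, load br_netfilter when the probe fails (the /proc path only exists once the module is in), then enable IPv4 forwarding. A compact sketch of that sequence; it needs root, and error handling is trimmed.

package main

import (
	"os"
	"os/exec"
)

func main() {
	// Probe: sudo sysctl net.bridge.bridge-nf-call-iptables
	if err := exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
		// /proc/sys/net/bridge is absent until the module loads.
		exec.Command("sudo", "modprobe", "br_netfilter").Run()
	}
	// Equivalent of: echo 1 > /proc/sys/net/ipv4/ip_forward
	os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0o644)
}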
	I0903 23:23:01.967449  144367 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0903 23:23:02.094485  144367 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0903 23:23:02.192789  144367 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0903 23:23:02.192893  144367 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0903 23:23:02.198266  144367 start.go:563] Will wait 60s for crictl version
	I0903 23:23:02.198322  144367 ssh_runner.go:195] Run: which crictl
	I0903 23:23:02.202115  144367 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0903 23:23:02.242488  144367 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0903 23:23:02.242583  144367 ssh_runner.go:195] Run: crio --version
	I0903 23:23:02.269724  144367 ssh_runner.go:195] Run: crio --version
	I0903 23:23:02.299169  144367 out.go:179] * Preparing Kubernetes v1.24.4 on CRI-O 1.29.1 ...
	I0903 23:23:02.300308  144367 main.go:141] libmachine: (test-preload-600653) Calling .GetIP
	I0903 23:23:02.302527  144367 main.go:141] libmachine: (test-preload-600653) DBG | domain test-preload-600653 has defined MAC address 52:54:00:4c:59:98 in network mk-test-preload-600653
	I0903 23:23:02.302874  144367 main.go:141] libmachine: (test-preload-600653) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:59:98", ip: ""} in network mk-test-preload-600653: {Iface:virbr1 ExpiryTime:2025-09-04 00:22:51 +0000 UTC Type:0 Mac:52:54:00:4c:59:98 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:test-preload-600653 Clientid:01:52:54:00:4c:59:98}
	I0903 23:23:02.302898  144367 main.go:141] libmachine: (test-preload-600653) DBG | domain test-preload-600653 has defined IP address 192.168.39.11 and MAC address 52:54:00:4c:59:98 in network mk-test-preload-600653
	I0903 23:23:02.303093  144367 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0903 23:23:02.307143  144367 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0903 23:23:02.321269  144367 kubeadm.go:875] updating cluster {Name:test-preload-600653 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21147/minikube-v1.36.0-1753487480-21147-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-600653 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.11 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0903 23:23:02.321425  144367 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0903 23:23:02.321497  144367 ssh_runner.go:195] Run: sudo crictl images --output json
	I0903 23:23:02.357500  144367 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I0903 23:23:02.357586  144367 ssh_runner.go:195] Run: which lz4
	I0903 23:23:02.361551  144367 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0903 23:23:02.365872  144367 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0903 23:23:02.365902  144367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (459355427 bytes)
	I0903 23:23:03.888836  144367 crio.go:462] duration metric: took 1.527309943s to copy over tarball
	I0903 23:23:03.888928  144367 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0903 23:23:05.707988  144367 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.819022472s)
	I0903 23:23:05.708018  144367 crio.go:469] duration metric: took 1.819150829s to extract the tarball
	I0903 23:23:05.708026  144367 ssh_runner.go:146] rm: /preloaded.tar.lz4
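The preload step boils down to: stat the tarball on the guest, copy it over only when the stat fails, extract it with an lz4-aware tar into /var, and remove it. A sketch of that flow follows; runCmd is a local stand-in for minikube's ssh_runner, not a real API.

package main

import (
	"fmt"
	"os/exec"
)

// runCmd runs a command and folds stdout/stderr into the error.
func runCmd(name string, args ...string) error {
	out, err := exec.Command(name, args...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("%s: %w: %s", name, err, out)
	}
	return nil
}

func main() {
	const tarball = "/preloaded.tar.lz4"
	if err := runCmd("stat", "-c", "%s %y", tarball); err != nil {
		// Missing on the guest: this is where the ~459 MB cache tarball
		// would be copied over, as the scp line in the log shows.
		fmt.Println("would copy preloaded-images tarball to", tarball)
	}
	// sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	_ = runCmd("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", tarball)
	_ = runCmd("sudo", "rm", "-f", tarball)
}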
	I0903 23:23:05.750976  144367 ssh_runner.go:195] Run: sudo crictl images --output json
	I0903 23:23:05.789280  144367 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I0903 23:23:05.789317  144367 cache_images.go:89] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.4 registry.k8s.io/kube-controller-manager:v1.24.4 registry.k8s.io/kube-scheduler:v1.24.4 registry.k8s.io/kube-proxy:v1.24.4 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0903 23:23:05.789405  144367 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0903 23:23:05.789439  144367 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0903 23:23:05.789462  144367 image.go:138] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0903 23:23:05.789430  144367 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0903 23:23:05.789443  144367 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0903 23:23:05.789491  144367 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0903 23:23:05.789503  144367 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0903 23:23:05.789468  144367 image.go:138] retrieving image: registry.k8s.io/pause:3.7
	I0903 23:23:05.791057  144367 image.go:181] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0903 23:23:05.791074  144367 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0903 23:23:05.791088  144367 image.go:181] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0903 23:23:05.791080  144367 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0903 23:23:05.791064  144367 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0903 23:23:05.791078  144367 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0903 23:23:05.791174  144367 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0903 23:23:05.791130  144367 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0903 23:23:05.961742  144367 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.4
	I0903 23:23:05.964952  144367 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0903 23:23:05.967564  144367 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0903 23:23:05.968791  144367 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.4
	I0903 23:23:05.969553  144367 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.4
	I0903 23:23:05.987018  144367 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0903 23:23:06.034196  144367 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.4
	I0903 23:23:06.082826  144367 cache_images.go:117] "registry.k8s.io/kube-scheduler:v1.24.4" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.4" does not exist at hash "03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9" in container runtime
	I0903 23:23:06.082892  144367 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.24.4
	I0903 23:23:06.082843  144367 cache_images.go:117] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "221177c6082a88ea4f6240ab2450d540955ac6f4d5454f0e15751b653ebda165" in container runtime
	I0903 23:23:06.082990  144367 cri.go:218] Removing image: registry.k8s.io/pause:3.7
	I0903 23:23:06.083040  144367 ssh_runner.go:195] Run: which crictl
	I0903 23:23:06.082952  144367 ssh_runner.go:195] Run: which crictl
	I0903 23:23:06.102268  144367 cache_images.go:117] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03" in container runtime
	I0903 23:23:06.102330  144367 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0903 23:23:06.102381  144367 ssh_runner.go:195] Run: which crictl
	I0903 23:23:06.116968  144367 cache_images.go:117] "registry.k8s.io/kube-controller-manager:v1.24.4" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.4" does not exist at hash "1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48" in container runtime
	I0903 23:23:06.116983  144367 cache_images.go:117] "registry.k8s.io/kube-apiserver:v1.24.4" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.4" does not exist at hash "6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d" in container runtime
	I0903 23:23:06.117008  144367 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0903 23:23:06.117021  144367 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.24.4
	I0903 23:23:06.117053  144367 ssh_runner.go:195] Run: which crictl
	I0903 23:23:06.117063  144367 ssh_runner.go:195] Run: which crictl
	I0903 23:23:06.124171  144367 cache_images.go:117] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b" in container runtime
	I0903 23:23:06.124210  144367 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0903 23:23:06.124250  144367 ssh_runner.go:195] Run: which crictl
	I0903 23:23:06.139409  144367 cache_images.go:117] "registry.k8s.io/kube-proxy:v1.24.4" needs transfer: "registry.k8s.io/kube-proxy:v1.24.4" does not exist at hash "7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7" in container runtime
	I0903 23:23:06.139460  144367 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.24.4
	I0903 23:23:06.139459  144367 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I0903 23:23:06.139497  144367 ssh_runner.go:195] Run: which crictl
	I0903 23:23:06.139514  144367 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0903 23:23:06.139532  144367 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0903 23:23:06.139563  144367 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I0903 23:23:06.139620  144367 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I0903 23:23:06.139622  144367 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0903 23:23:06.258471  144367 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0903 23:23:06.258631  144367 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I0903 23:23:06.258636  144367 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I0903 23:23:06.258694  144367 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I0903 23:23:06.268129  144367 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0903 23:23:06.268207  144367 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0903 23:23:06.268250  144367 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I0903 23:23:06.393453  144367 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I0903 23:23:06.393529  144367 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I0903 23:23:06.395523  144367 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0903 23:23:06.395529  144367 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I0903 23:23:06.395596  144367 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0903 23:23:06.404255  144367 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0903 23:23:06.404268  144367 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I0903 23:23:06.494558  144367 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I0903 23:23:06.534916  144367 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21341-109162/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7
	I0903 23:23:06.535033  144367 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21341-109162/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0
	I0903 23:23:06.535097  144367 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21341-109162/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4
	I0903 23:23:06.535137  144367 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0903 23:23:06.535145  144367 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21341-109162/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4
	I0903 23:23:06.535179  144367 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0903 23:23:06.535040  144367 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0903 23:23:06.535186  144367 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21341-109162/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4
	I0903 23:23:06.535213  144367 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0903 23:23:06.535235  144367 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21341-109162/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6
	I0903 23:23:06.535254  144367 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0903 23:23:06.535308  144367 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0903 23:23:06.573479  144367 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21341-109162/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4
	I0903 23:23:06.573549  144367 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.24.4 (exists)
	I0903 23:23:06.573579  144367 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0903 23:23:06.573605  144367 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.24.4
	I0903 23:23:06.573609  144367 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.7 (exists)
	I0903 23:23:06.573627  144367 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0903 23:23:06.573549  144367 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.3-0 (exists)
	I0903 23:23:06.573651  144367 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.24.4 (exists)
	I0903 23:23:06.573673  144367 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.24.4 (exists)
	I0903 23:23:06.573697  144367 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.8.6 (exists)
	I0903 23:23:06.578788  144367 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.24.4 (exists)
	I0903 23:23:07.057191  144367 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0903 23:23:09.726168  144367 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.24.4: (3.152511694s)
	I0903 23:23:09.726211  144367 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21341-109162/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4 from cache
	I0903 23:23:09.726238  144367 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.66900808s)
	I0903 23:23:09.726244  144367 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.7
	I0903 23:23:09.726330  144367 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.7
	I0903 23:23:09.866324  144367 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21341-109162/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7 from cache
	I0903 23:23:09.866389  144367 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0903 23:23:09.866454  144367 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0
	I0903 23:23:11.910431  144367 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0: (2.043945187s)
	I0903 23:23:11.910482  144367 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21341-109162/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0903 23:23:11.910519  144367 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0903 23:23:11.910579  144367 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0903 23:23:12.352279  144367 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21341-109162/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4 from cache
	I0903 23:23:12.352328  144367 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0903 23:23:12.352376  144367 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0903 23:23:13.097594  144367 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21341-109162/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4 from cache
	I0903 23:23:13.097648  144367 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0903 23:23:13.097698  144367 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.8.6
	I0903 23:23:13.438388  144367 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21341-109162/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0903 23:23:13.438443  144367 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.24.4
	I0903 23:23:13.438495  144367 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.24.4
	I0903 23:23:14.277938  144367 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21341-109162/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4 from cache
	I0903 23:23:14.277984  144367 cache_images.go:124] Successfully loaded all cached images
	I0903 23:23:14.277991  144367 cache_images.go:93] duration metric: took 8.488660198s to LoadCachedImages
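LoadCachedImages, as logged above, inspects each required image in the runtime and, for the missing ones, transfers the cached tarball and loads it with podman. A serialized sketch follows; minikube fans the per-image work out concurrently, which is why the timestamps interleave.

package main

import (
	"fmt"
	"os/exec"
	"path"
	"strings"
)

func main() {
	images := []string{
		"registry.k8s.io/kube-apiserver:v1.24.4",
		"registry.k8s.io/pause:3.7",
		"registry.k8s.io/etcd:3.5.3-0",
	}
	for _, img := range images {
		// `podman image inspect` exits non-zero when the image is absent.
		if exec.Command("sudo", "podman", "image", "inspect", "--format", "{{.Id}}", img).Run() == nil {
			continue // already present in the container runtime
		}
		// e.g. kube-apiserver:v1.24.4 -> /var/lib/minikube/images/kube-apiserver_v1.24.4
		tar := "/var/lib/minikube/images/" + strings.ReplaceAll(path.Base(img), ":", "_")
		fmt.Println("loading cached image from", tar)
		_ = exec.Command("sudo", "podman", "load", "-i", tar).Run()
	}
}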
	I0903 23:23:14.278006  144367 kubeadm.go:926] updating node { 192.168.39.11 8443 v1.24.4 crio true true} ...
	I0903 23:23:14.278123  144367 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=test-preload-600653 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.11
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.4 ClusterName:test-preload-600653 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0903 23:23:14.278219  144367 ssh_runner.go:195] Run: crio config
	I0903 23:23:14.325651  144367 cni.go:84] Creating CNI manager for ""
	I0903 23:23:14.325674  144367 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0903 23:23:14.325684  144367 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0903 23:23:14.325701  144367 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.11 APIServerPort:8443 KubernetesVersion:v1.24.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-600653 NodeName:test-preload-600653 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.11"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.11 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0903 23:23:14.325850  144367 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.11
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-600653"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.11
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.11"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0903 23:23:14.325924  144367 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.4
	I0903 23:23:14.337540  144367 binaries.go:44] Found k8s binaries, skipping transfer
	I0903 23:23:14.337625  144367 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0903 23:23:14.348105  144367 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I0903 23:23:14.366235  144367 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0903 23:23:14.384485  144367 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2103 bytes)
	I0903 23:23:14.403140  144367 ssh_runner.go:195] Run: grep 192.168.39.11	control-plane.minikube.internal$ /etc/hosts
	I0903 23:23:14.406929  144367 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.11	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0903 23:23:14.419618  144367 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0903 23:23:14.549829  144367 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0903 23:23:14.586820  144367 certs.go:68] Setting up /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/test-preload-600653 for IP: 192.168.39.11
	I0903 23:23:14.586847  144367 certs.go:194] generating shared ca certs ...
	I0903 23:23:14.586864  144367 certs.go:226] acquiring lock for ca certs: {Name:mk06ceac58fd506484973632a7bd0b701183c2c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0903 23:23:14.587019  144367 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21341-109162/.minikube/ca.key
	I0903 23:23:14.587061  144367 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21341-109162/.minikube/proxy-client-ca.key
	I0903 23:23:14.587070  144367 certs.go:256] generating profile certs ...
	I0903 23:23:14.587143  144367 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/test-preload-600653/client.key
	I0903 23:23:14.587198  144367 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/test-preload-600653/apiserver.key.a6086c6d
	I0903 23:23:14.587232  144367 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/test-preload-600653/proxy-client.key
	I0903 23:23:14.587357  144367 certs.go:484] found cert: /home/jenkins/minikube-integration/21341-109162/.minikube/certs/113288.pem (1338 bytes)
	W0903 23:23:14.587386  144367 certs.go:480] ignoring /home/jenkins/minikube-integration/21341-109162/.minikube/certs/113288_empty.pem, impossibly tiny 0 bytes
	I0903 23:23:14.587393  144367 certs.go:484] found cert: /home/jenkins/minikube-integration/21341-109162/.minikube/certs/ca-key.pem (1675 bytes)
	I0903 23:23:14.587415  144367 certs.go:484] found cert: /home/jenkins/minikube-integration/21341-109162/.minikube/certs/ca.pem (1078 bytes)
	I0903 23:23:14.587435  144367 certs.go:484] found cert: /home/jenkins/minikube-integration/21341-109162/.minikube/certs/cert.pem (1123 bytes)
	I0903 23:23:14.587457  144367 certs.go:484] found cert: /home/jenkins/minikube-integration/21341-109162/.minikube/certs/key.pem (1675 bytes)
	I0903 23:23:14.587493  144367 certs.go:484] found cert: /home/jenkins/minikube-integration/21341-109162/.minikube/files/etc/ssl/certs/1132882.pem (1708 bytes)
	I0903 23:23:14.588069  144367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0903 23:23:14.618498  144367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0903 23:23:14.646048  144367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0903 23:23:14.685902  144367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0903 23:23:14.711851  144367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/test-preload-600653/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0903 23:23:14.737621  144367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/test-preload-600653/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0903 23:23:14.763758  144367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/test-preload-600653/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0903 23:23:14.789940  144367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/test-preload-600653/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0903 23:23:14.815981  144367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0903 23:23:14.841987  144367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/certs/113288.pem --> /usr/share/ca-certificates/113288.pem (1338 bytes)
	I0903 23:23:14.868097  144367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/files/etc/ssl/certs/1132882.pem --> /usr/share/ca-certificates/1132882.pem (1708 bytes)
	I0903 23:23:14.894538  144367 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0903 23:23:14.913596  144367 ssh_runner.go:195] Run: openssl version
	I0903 23:23:14.919752  144367 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0903 23:23:14.931604  144367 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0903 23:23:14.936249  144367 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep  3 22:28 /usr/share/ca-certificates/minikubeCA.pem
	I0903 23:23:14.936300  144367 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0903 23:23:14.942777  144367 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0903 23:23:14.954178  144367 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/113288.pem && ln -fs /usr/share/ca-certificates/113288.pem /etc/ssl/certs/113288.pem"
	I0903 23:23:14.966123  144367 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/113288.pem
	I0903 23:23:14.970774  144367 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep  3 22:36 /usr/share/ca-certificates/113288.pem
	I0903 23:23:14.970830  144367 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/113288.pem
	I0903 23:23:14.977439  144367 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/113288.pem /etc/ssl/certs/51391683.0"
	I0903 23:23:14.988844  144367 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1132882.pem && ln -fs /usr/share/ca-certificates/1132882.pem /etc/ssl/certs/1132882.pem"
	I0903 23:23:15.000481  144367 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1132882.pem
	I0903 23:23:15.005018  144367 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep  3 22:36 /usr/share/ca-certificates/1132882.pem
	I0903 23:23:15.005074  144367 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1132882.pem
	I0903 23:23:15.011531  144367 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1132882.pem /etc/ssl/certs/3ec20f2e.0"
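
The hash-and-symlink sequence above is how OpenSSL's trust directory works: each CA PEM under /usr/share/ca-certificates is hashed with `openssl x509 -hash` and exposed in /etc/ssl/certs as a `<subject-hash>.0` link. A minimal Go sketch of that one step, using the paths from the log (illustrative, not minikube's actual certs.go code):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // linkBySubjectHash mirrors the `openssl x509 -hash` + `ln -fs` pair above:
    // OpenSSL looks up trusted CAs by files named <subject-hash>.0 in certsDir.
    func linkBySubjectHash(pemPath, certsDir string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
        if err != nil {
            return err
        }
        hash := strings.TrimSpace(string(out))
        link := filepath.Join(certsDir, hash+".0")
        _ = os.Remove(link) // -f semantics: replace a stale link if present
        return os.Symlink(pemPath, link)
    }

    func main() {
        if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }
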
	I0903 23:23:15.022957  144367 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0903 23:23:15.027569  144367 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0903 23:23:15.034580  144367 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0903 23:23:15.041046  144367 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0903 23:23:15.047703  144367 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0903 23:23:15.054278  144367 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0903 23:23:15.060815  144367 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
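
The stat plus `-checkend 86400` probes above confirm each control-plane certificate exists and will still be valid in 24 hours; a failure here would trigger regeneration. The equivalent check in Go (a sketch assuming a PEM-encoded certificate on disk, not the actual certs.go logic):

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin reports whether the certificate at path expires within d,
    // mirroring `openssl x509 -checkend <seconds>` (86400s = 24h in the log).
    func expiresWithin(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM block in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
        expiring, err := expiresWithin("/var/lib/minikube/certs/front-proxy-client.crt", 24*time.Hour)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Println("expires within 24h:", expiring)
    }
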
	I0903 23:23:15.067549  144367 kubeadm.go:392] StartCluster: {Name:test-preload-600653 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21147/minikube-v1.36.0-1753487480-21147-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-600653 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.11 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0903 23:23:15.067649  144367 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0903 23:23:15.067705  144367 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0903 23:23:15.103167  144367 cri.go:89] found id: ""
	I0903 23:23:15.103250  144367 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0903 23:23:15.114558  144367 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0903 23:23:15.114582  144367 kubeadm.go:589] restartPrimaryControlPlane start ...
	I0903 23:23:15.114635  144367 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0903 23:23:15.125746  144367 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0903 23:23:15.126239  144367 kubeconfig.go:47] verify endpoint returned: get endpoint: "test-preload-600653" does not appear in /home/jenkins/minikube-integration/21341-109162/kubeconfig
	I0903 23:23:15.126398  144367 kubeconfig.go:62] /home/jenkins/minikube-integration/21341-109162/kubeconfig needs updating (will repair): [kubeconfig missing "test-preload-600653" cluster setting kubeconfig missing "test-preload-600653" context setting]
	I0903 23:23:15.126788  144367 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21341-109162/kubeconfig: {Name:mkb8e4377749caffe9aea4452a6b5de9f0dc7427 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0903 23:23:15.127436  144367 kapi.go:59] client config for test-preload-600653: &rest.Config{Host:"https://192.168.39.11:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21341-109162/.minikube/profiles/test-preload-600653/client.crt", KeyFile:"/home/jenkins/minikube-integration/21341-109162/.minikube/profiles/test-preload-600653/client.key", CAFile:"/home/jenkins/minikube-integration/21341-109162/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x259d6e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0903 23:23:15.127919  144367 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I0903 23:23:15.127938  144367 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I0903 23:23:15.127945  144367 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0903 23:23:15.127951  144367 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I0903 23:23:15.127956  144367 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0903 23:23:15.128363  144367 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0903 23:23:15.138647  144367 kubeadm.go:626] The running cluster does not require reconfiguration: 192.168.39.11
	I0903 23:23:15.138684  144367 kubeadm.go:1152] stopping kube-system containers ...
	I0903 23:23:15.138699  144367 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0903 23:23:15.138751  144367 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0903 23:23:15.173039  144367 cri.go:89] found id: ""
	I0903 23:23:15.173115  144367 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0903 23:23:15.190176  144367 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0903 23:23:15.201750  144367 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0903 23:23:15.201778  144367 kubeadm.go:157] found existing configuration files:
	
	I0903 23:23:15.201833  144367 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0903 23:23:15.211862  144367 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0903 23:23:15.211933  144367 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0903 23:23:15.222575  144367 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0903 23:23:15.232117  144367 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0903 23:23:15.232182  144367 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0903 23:23:15.242665  144367 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0903 23:23:15.252131  144367 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0903 23:23:15.252190  144367 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0903 23:23:15.263220  144367 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0903 23:23:15.274688  144367 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0903 23:23:15.274734  144367 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0903 23:23:15.286745  144367 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0903 23:23:15.299128  144367 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0903 23:23:15.354147  144367 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0903 23:23:16.409587  144367 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.055400571s)
	I0903 23:23:16.409631  144367 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0903 23:23:16.688316  144367 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0903 23:23:16.752889  144367 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
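
Rather than a full `kubeadm init`, the restart path replays individual init phases against the same /var/tmp/minikube/kubeadm.yaml: certs, kubeconfig, kubelet-start, control-plane, then local etcd. A hedged sketch of driving that sequence (minikube runs these over SSH via ssh_runner; here they run locally):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        // Phase order matters: each later phase consumes artifacts from earlier ones.
        phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
        for _, phase := range phases {
            args := append([]string{"init", "phase"}, strings.Fields(phase)...)
            args = append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")
            if out, err := exec.Command("kubeadm", args...).CombinedOutput(); err != nil {
                fmt.Fprintf(os.Stderr, "phase %q failed: %v\n%s\n", phase, err, out)
                os.Exit(1)
            }
        }
    }
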
	I0903 23:23:16.838951  144367 api_server.go:52] waiting for apiserver process to appear ...
	I0903 23:23:16.839056  144367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:23:17.339411  144367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:23:17.839681  144367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:23:17.873188  144367 api_server.go:72] duration metric: took 1.034234213s to wait for apiserver process to appear ...
	I0903 23:23:17.873229  144367 api_server.go:88] waiting for apiserver healthz status ...
	I0903 23:23:17.873267  144367 api_server.go:253] Checking apiserver healthz at https://192.168.39.11:8443/healthz ...
	I0903 23:23:17.873840  144367 api_server.go:269] stopped: https://192.168.39.11:8443/healthz: Get "https://192.168.39.11:8443/healthz": dial tcp 192.168.39.11:8443: connect: connection refused
	I0903 23:23:18.373561  144367 api_server.go:253] Checking apiserver healthz at https://192.168.39.11:8443/healthz ...
	I0903 23:23:21.295484  144367 api_server.go:279] https://192.168.39.11:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0903 23:23:21.295522  144367 api_server.go:103] status: https://192.168.39.11:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0903 23:23:21.295548  144367 api_server.go:253] Checking apiserver healthz at https://192.168.39.11:8443/healthz ...
	I0903 23:23:21.314430  144367 api_server.go:279] https://192.168.39.11:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0903 23:23:21.314465  144367 api_server.go:103] status: https://192.168.39.11:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0903 23:23:21.373807  144367 api_server.go:253] Checking apiserver healthz at https://192.168.39.11:8443/healthz ...
	I0903 23:23:21.393400  144367 api_server.go:279] https://192.168.39.11:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0903 23:23:21.393430  144367 api_server.go:103] status: https://192.168.39.11:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0903 23:23:21.873799  144367 api_server.go:253] Checking apiserver healthz at https://192.168.39.11:8443/healthz ...
	I0903 23:23:21.879190  144367 api_server.go:279] https://192.168.39.11:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0903 23:23:21.879216  144367 api_server.go:103] status: https://192.168.39.11:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0903 23:23:22.373848  144367 api_server.go:253] Checking apiserver healthz at https://192.168.39.11:8443/healthz ...
	I0903 23:23:22.379552  144367 api_server.go:279] https://192.168.39.11:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0903 23:23:22.379578  144367 api_server.go:103] status: https://192.168.39.11:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0903 23:23:22.874174  144367 api_server.go:253] Checking apiserver healthz at https://192.168.39.11:8443/healthz ...
	I0903 23:23:22.879224  144367 api_server.go:279] https://192.168.39.11:8443/healthz returned 200:
	ok
	I0903 23:23:22.885835  144367 api_server.go:141] control plane version: v1.24.4
	I0903 23:23:22.885861  144367 api_server.go:131] duration metric: took 5.012613634s to wait for apiserver health ...
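
The 403 -> 500 -> 200 progression above is the apiserver booting: anonymous /healthz is rejected until the RBAC bootstrap roles land, then individual poststarthooks clear until the endpoint returns a plain `ok`. The poll itself is a simple retry loop; a sketch (InsecureSkipVerify stands in for the cluster CA the real client pins):

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    // waitForHealthz retries GET /healthz until it returns 200 or the deadline
    // passes; 403 and 500 responses just mean the apiserver is still coming up.
    func waitForHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout:   2 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("apiserver healthz not ready within %s", timeout)
    }

    func main() {
        fmt.Println(waitForHealthz("https://192.168.39.11:8443/healthz", time.Minute))
    }
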
	I0903 23:23:22.885870  144367 cni.go:84] Creating CNI manager for ""
	I0903 23:23:22.885878  144367 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0903 23:23:22.887516  144367 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I0903 23:23:22.888498  144367 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0903 23:23:22.906430  144367 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
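
The 496-byte file scp'd to /etc/cni/net.d/1-k8s.conflist is the bridge CNI config chosen above for the kvm2 + crio combination. Its exact contents are not in the log, so the JSON below is only a plausible bridge-plus-portmap conflist written the same way (an assumption, not the verbatim file):

    package main

    import "os"

    // conflist is an assumed bridge CNI config; minikube's real template may differ.
    const conflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }`

    func main() {
        _ = os.MkdirAll("/etc/cni/net.d", 0o755)
        _ = os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644)
    }
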
	I0903 23:23:22.932835  144367 system_pods.go:43] waiting for kube-system pods to appear ...
	I0903 23:23:22.939629  144367 system_pods.go:59] 7 kube-system pods found
	I0903 23:23:22.939659  144367 system_pods.go:61] "coredns-6d4b75cb6d-wphgl" [32eab7b4-a2f4-46b5-b345-cf864edce160] Running
	I0903 23:23:22.939668  144367 system_pods.go:61] "etcd-test-preload-600653" [574e2cb7-fd51-4181-9856-a2ee90898c03] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0903 23:23:22.939674  144367 system_pods.go:61] "kube-apiserver-test-preload-600653" [99250bfe-723f-4fca-8ab9-7ea4f3a44b44] Running
	I0903 23:23:22.939680  144367 system_pods.go:61] "kube-controller-manager-test-preload-600653" [b6420f31-0c0e-4f48-98b7-6f3e60ec89bc] Running
	I0903 23:23:22.939683  144367 system_pods.go:61] "kube-proxy-kzg7w" [f286793c-3cd3-4f54-b061-76a18ad9cf39] Running
	I0903 23:23:22.939687  144367 system_pods.go:61] "kube-scheduler-test-preload-600653" [51c99a2e-9e65-43df-9018-66b2e2bf4b08] Running
	I0903 23:23:22.939691  144367 system_pods.go:61] "storage-provisioner" [ec348371-2370-47a9-af61-16853b146032] Running
	I0903 23:23:22.939701  144367 system_pods.go:74] duration metric: took 6.839638ms to wait for pod list to return data ...
	I0903 23:23:22.939711  144367 node_conditions.go:102] verifying NodePressure condition ...
	I0903 23:23:22.943256  144367 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0903 23:23:22.943285  144367 node_conditions.go:123] node cpu capacity is 2
	I0903 23:23:22.943299  144367 node_conditions.go:105] duration metric: took 3.580997ms to run NodePressure ...
	I0903 23:23:22.943332  144367 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0903 23:23:23.142586  144367 kubeadm.go:720] waiting for restarted kubelet to initialise ...
	I0903 23:23:23.145418  144367 kubeadm.go:735] kubelet initialised
	I0903 23:23:23.145439  144367 kubeadm.go:736] duration metric: took 2.822027ms waiting for restarted kubelet to initialise ...
	I0903 23:23:23.145458  144367 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0903 23:23:23.162849  144367 ops.go:34] apiserver oom_adj: -16
	I0903 23:23:23.162870  144367 kubeadm.go:593] duration metric: took 8.048281462s to restartPrimaryControlPlane
	I0903 23:23:23.162877  144367 kubeadm.go:394] duration metric: took 8.095338941s to StartCluster
	I0903 23:23:23.162895  144367 settings.go:142] acquiring lock: {Name:mkb1ef9c34f4ee762bb1ce9c74e3b8a2e234a4c2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0903 23:23:23.162962  144367 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21341-109162/kubeconfig
	I0903 23:23:23.163579  144367 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21341-109162/kubeconfig: {Name:mkb8e4377749caffe9aea4452a6b5de9f0dc7427 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0903 23:23:23.163810  144367 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.11 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0903 23:23:23.163895  144367 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0903 23:23:23.164006  144367 addons.go:69] Setting storage-provisioner=true in profile "test-preload-600653"
	I0903 23:23:23.164029  144367 addons.go:69] Setting default-storageclass=true in profile "test-preload-600653"
	I0903 23:23:23.164048  144367 addons.go:238] Setting addon storage-provisioner=true in "test-preload-600653"
	I0903 23:23:23.164056  144367 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "test-preload-600653"
	I0903 23:23:23.164066  144367 config.go:182] Loaded profile config "test-preload-600653": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	W0903 23:23:23.164056  144367 addons.go:247] addon storage-provisioner should already be in state true
	I0903 23:23:23.164146  144367 host.go:66] Checking if "test-preload-600653" exists ...
	I0903 23:23:23.164466  144367 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21341-109162/.minikube/bin/docker-machine-driver-kvm2
	I0903 23:23:23.164489  144367 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21341-109162/.minikube/bin/docker-machine-driver-kvm2
	I0903 23:23:23.164509  144367 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0903 23:23:23.164518  144367 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0903 23:23:23.165231  144367 out.go:179] * Verifying Kubernetes components...
	I0903 23:23:23.166501  144367 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0903 23:23:23.179871  144367 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46241
	I0903 23:23:23.179930  144367 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42585
	I0903 23:23:23.180315  144367 main.go:141] libmachine: () Calling .GetVersion
	I0903 23:23:23.180447  144367 main.go:141] libmachine: () Calling .GetVersion
	I0903 23:23:23.180771  144367 main.go:141] libmachine: Using API Version  1
	I0903 23:23:23.180789  144367 main.go:141] libmachine: () Calling .SetConfigRaw
	I0903 23:23:23.180913  144367 main.go:141] libmachine: Using API Version  1
	I0903 23:23:23.180936  144367 main.go:141] libmachine: () Calling .SetConfigRaw
	I0903 23:23:23.181156  144367 main.go:141] libmachine: () Calling .GetMachineName
	I0903 23:23:23.181220  144367 main.go:141] libmachine: () Calling .GetMachineName
	I0903 23:23:23.181381  144367 main.go:141] libmachine: (test-preload-600653) Calling .GetState
	I0903 23:23:23.181756  144367 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21341-109162/.minikube/bin/docker-machine-driver-kvm2
	I0903 23:23:23.181807  144367 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0903 23:23:23.183447  144367 kapi.go:59] client config for test-preload-600653: &rest.Config{Host:"https://192.168.39.11:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21341-109162/.minikube/profiles/test-preload-600653/client.crt", KeyFile:"/home/jenkins/minikube-integration/21341-109162/.minikube/profiles/test-preload-600653/client.key", CAFile:"/home/jenkins/minikube-integration/21341-109162/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x259d6e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0903 23:23:23.183694  144367 addons.go:238] Setting addon default-storageclass=true in "test-preload-600653"
	W0903 23:23:23.183707  144367 addons.go:247] addon default-storageclass should already be in state true
	I0903 23:23:23.183735  144367 host.go:66] Checking if "test-preload-600653" exists ...
	I0903 23:23:23.183967  144367 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21341-109162/.minikube/bin/docker-machine-driver-kvm2
	I0903 23:23:23.184003  144367 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0903 23:23:23.196752  144367 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44687
	I0903 23:23:23.197337  144367 main.go:141] libmachine: () Calling .GetVersion
	I0903 23:23:23.197941  144367 main.go:141] libmachine: Using API Version  1
	I0903 23:23:23.197968  144367 main.go:141] libmachine: () Calling .SetConfigRaw
	I0903 23:23:23.198198  144367 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43799
	I0903 23:23:23.198350  144367 main.go:141] libmachine: () Calling .GetMachineName
	I0903 23:23:23.198542  144367 main.go:141] libmachine: (test-preload-600653) Calling .GetState
	I0903 23:23:23.198713  144367 main.go:141] libmachine: () Calling .GetVersion
	I0903 23:23:23.199359  144367 main.go:141] libmachine: Using API Version  1
	I0903 23:23:23.199385  144367 main.go:141] libmachine: () Calling .SetConfigRaw
	I0903 23:23:23.199754  144367 main.go:141] libmachine: () Calling .GetMachineName
	I0903 23:23:23.200138  144367 main.go:141] libmachine: (test-preload-600653) Calling .DriverName
	I0903 23:23:23.200342  144367 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21341-109162/.minikube/bin/docker-machine-driver-kvm2
	I0903 23:23:23.200387  144367 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0903 23:23:23.202031  144367 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0903 23:23:23.203185  144367 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0903 23:23:23.203221  144367 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0903 23:23:23.203249  144367 main.go:141] libmachine: (test-preload-600653) Calling .GetSSHHostname
	I0903 23:23:23.205793  144367 main.go:141] libmachine: (test-preload-600653) DBG | domain test-preload-600653 has defined MAC address 52:54:00:4c:59:98 in network mk-test-preload-600653
	I0903 23:23:23.206092  144367 main.go:141] libmachine: (test-preload-600653) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:59:98", ip: ""} in network mk-test-preload-600653: {Iface:virbr1 ExpiryTime:2025-09-04 00:22:51 +0000 UTC Type:0 Mac:52:54:00:4c:59:98 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:test-preload-600653 Clientid:01:52:54:00:4c:59:98}
	I0903 23:23:23.206123  144367 main.go:141] libmachine: (test-preload-600653) DBG | domain test-preload-600653 has defined IP address 192.168.39.11 and MAC address 52:54:00:4c:59:98 in network mk-test-preload-600653
	I0903 23:23:23.206293  144367 main.go:141] libmachine: (test-preload-600653) Calling .GetSSHPort
	I0903 23:23:23.206464  144367 main.go:141] libmachine: (test-preload-600653) Calling .GetSSHKeyPath
	I0903 23:23:23.206626  144367 main.go:141] libmachine: (test-preload-600653) Calling .GetSSHUsername
	I0903 23:23:23.206761  144367 sshutil.go:53] new ssh client: &{IP:192.168.39.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21341-109162/.minikube/machines/test-preload-600653/id_rsa Username:docker}
	I0903 23:23:23.237128  144367 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38483
	I0903 23:23:23.237633  144367 main.go:141] libmachine: () Calling .GetVersion
	I0903 23:23:23.238166  144367 main.go:141] libmachine: Using API Version  1
	I0903 23:23:23.238193  144367 main.go:141] libmachine: () Calling .SetConfigRaw
	I0903 23:23:23.238582  144367 main.go:141] libmachine: () Calling .GetMachineName
	I0903 23:23:23.238804  144367 main.go:141] libmachine: (test-preload-600653) Calling .GetState
	I0903 23:23:23.240429  144367 main.go:141] libmachine: (test-preload-600653) Calling .DriverName
	I0903 23:23:23.240669  144367 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0903 23:23:23.240689  144367 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0903 23:23:23.240717  144367 main.go:141] libmachine: (test-preload-600653) Calling .GetSSHHostname
	I0903 23:23:23.243878  144367 main.go:141] libmachine: (test-preload-600653) DBG | domain test-preload-600653 has defined MAC address 52:54:00:4c:59:98 in network mk-test-preload-600653
	I0903 23:23:23.243916  144367 main.go:141] libmachine: (test-preload-600653) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:59:98", ip: ""} in network mk-test-preload-600653: {Iface:virbr1 ExpiryTime:2025-09-04 00:22:51 +0000 UTC Type:0 Mac:52:54:00:4c:59:98 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:test-preload-600653 Clientid:01:52:54:00:4c:59:98}
	I0903 23:23:23.243966  144367 main.go:141] libmachine: (test-preload-600653) DBG | domain test-preload-600653 has defined IP address 192.168.39.11 and MAC address 52:54:00:4c:59:98 in network mk-test-preload-600653
	I0903 23:23:23.244067  144367 main.go:141] libmachine: (test-preload-600653) Calling .GetSSHPort
	I0903 23:23:23.244233  144367 main.go:141] libmachine: (test-preload-600653) Calling .GetSSHKeyPath
	I0903 23:23:23.244406  144367 main.go:141] libmachine: (test-preload-600653) Calling .GetSSHUsername
	I0903 23:23:23.244536  144367 sshutil.go:53] new ssh client: &{IP:192.168.39.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21341-109162/.minikube/machines/test-preload-600653/id_rsa Username:docker}
	I0903 23:23:23.377865  144367 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0903 23:23:23.399242  144367 node_ready.go:35] waiting up to 6m0s for node "test-preload-600653" to be "Ready" ...
	I0903 23:23:23.476127  144367 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0903 23:23:23.581242  144367 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0903 23:23:24.386110  144367 main.go:141] libmachine: Making call to close driver server
	I0903 23:23:24.386145  144367 main.go:141] libmachine: (test-preload-600653) Calling .Close
	I0903 23:23:24.386476  144367 main.go:141] libmachine: Successfully made call to close driver server
	I0903 23:23:24.386499  144367 main.go:141] libmachine: Making call to close connection to plugin binary
	I0903 23:23:24.386508  144367 main.go:141] libmachine: Making call to close driver server
	I0903 23:23:24.386517  144367 main.go:141] libmachine: (test-preload-600653) Calling .Close
	I0903 23:23:24.386798  144367 main.go:141] libmachine: (test-preload-600653) DBG | Closing plugin on server side
	I0903 23:23:24.386830  144367 main.go:141] libmachine: Successfully made call to close driver server
	I0903 23:23:24.386846  144367 main.go:141] libmachine: Making call to close connection to plugin binary
	I0903 23:23:24.392721  144367 main.go:141] libmachine: Making call to close driver server
	I0903 23:23:24.392739  144367 main.go:141] libmachine: (test-preload-600653) Calling .Close
	I0903 23:23:24.393035  144367 main.go:141] libmachine: Successfully made call to close driver server
	I0903 23:23:24.393057  144367 main.go:141] libmachine: Making call to close connection to plugin binary
	I0903 23:23:24.393056  144367 main.go:141] libmachine: (test-preload-600653) DBG | Closing plugin on server side
	I0903 23:23:24.396815  144367 main.go:141] libmachine: Making call to close driver server
	I0903 23:23:24.396831  144367 main.go:141] libmachine: (test-preload-600653) Calling .Close
	I0903 23:23:24.397104  144367 main.go:141] libmachine: Successfully made call to close driver server
	I0903 23:23:24.397123  144367 main.go:141] libmachine: Making call to close connection to plugin binary
	I0903 23:23:24.397133  144367 main.go:141] libmachine: Making call to close driver server
	I0903 23:23:24.397142  144367 main.go:141] libmachine: (test-preload-600653) Calling .Close
	I0903 23:23:24.397330  144367 main.go:141] libmachine: Successfully made call to close driver server
	I0903 23:23:24.397347  144367 main.go:141] libmachine: Making call to close connection to plugin binary
	I0903 23:23:24.397353  144367 main.go:141] libmachine: (test-preload-600653) DBG | Closing plugin on server side
	I0903 23:23:24.399025  144367 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I0903 23:23:24.399999  144367 addons.go:514] duration metric: took 1.236118941s for enable addons: enabled=[default-storageclass storage-provisioner]
	W0903 23:23:25.402400  144367 node_ready.go:57] node "test-preload-600653" has "Ready":"False" status (will retry)
	W0903 23:23:27.403147  144367 node_ready.go:57] node "test-preload-600653" has "Ready":"False" status (will retry)
	W0903 23:23:29.906022  144367 node_ready.go:57] node "test-preload-600653" has "Ready":"False" status (will retry)
	I0903 23:23:31.902856  144367 node_ready.go:49] node "test-preload-600653" is "Ready"
	I0903 23:23:31.902899  144367 node_ready.go:38] duration metric: took 8.503612334s for node "test-preload-600653" to be "Ready" ...
	I0903 23:23:31.902919  144367 api_server.go:52] waiting for apiserver process to appear ...
	I0903 23:23:31.902995  144367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:23:31.921953  144367 api_server.go:72] duration metric: took 8.758110786s to wait for apiserver process to appear ...
	I0903 23:23:31.921989  144367 api_server.go:88] waiting for apiserver healthz status ...
	I0903 23:23:31.922014  144367 api_server.go:253] Checking apiserver healthz at https://192.168.39.11:8443/healthz ...
	I0903 23:23:31.927662  144367 api_server.go:279] https://192.168.39.11:8443/healthz returned 200:
	ok
	I0903 23:23:31.928614  144367 api_server.go:141] control plane version: v1.24.4
	I0903 23:23:31.928638  144367 api_server.go:131] duration metric: took 6.639696ms to wait for apiserver health ...
	I0903 23:23:31.928649  144367 system_pods.go:43] waiting for kube-system pods to appear ...
	I0903 23:23:31.934956  144367 system_pods.go:59] 7 kube-system pods found
	I0903 23:23:31.934985  144367 system_pods.go:61] "coredns-6d4b75cb6d-wphgl" [32eab7b4-a2f4-46b5-b345-cf864edce160] Running
	I0903 23:23:31.934996  144367 system_pods.go:61] "etcd-test-preload-600653" [574e2cb7-fd51-4181-9856-a2ee90898c03] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0903 23:23:31.935006  144367 system_pods.go:61] "kube-apiserver-test-preload-600653" [99250bfe-723f-4fca-8ab9-7ea4f3a44b44] Running
	I0903 23:23:31.935018  144367 system_pods.go:61] "kube-controller-manager-test-preload-600653" [b6420f31-0c0e-4f48-98b7-6f3e60ec89bc] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0903 23:23:31.935023  144367 system_pods.go:61] "kube-proxy-kzg7w" [f286793c-3cd3-4f54-b061-76a18ad9cf39] Running
	I0903 23:23:31.935039  144367 system_pods.go:61] "kube-scheduler-test-preload-600653" [51c99a2e-9e65-43df-9018-66b2e2bf4b08] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0903 23:23:31.935047  144367 system_pods.go:61] "storage-provisioner" [ec348371-2370-47a9-af61-16853b146032] Running
	I0903 23:23:31.935055  144367 system_pods.go:74] duration metric: took 6.396952ms to wait for pod list to return data ...
	I0903 23:23:31.935067  144367 default_sa.go:34] waiting for default service account to be created ...
	I0903 23:23:31.939424  144367 default_sa.go:45] found service account: "default"
	I0903 23:23:31.939448  144367 default_sa.go:55] duration metric: took 4.370182ms for default service account to be created ...
	I0903 23:23:31.939459  144367 system_pods.go:116] waiting for k8s-apps to be running ...
	I0903 23:23:31.942863  144367 system_pods.go:86] 7 kube-system pods found
	I0903 23:23:31.942885  144367 system_pods.go:89] "coredns-6d4b75cb6d-wphgl" [32eab7b4-a2f4-46b5-b345-cf864edce160] Running
	I0903 23:23:31.942897  144367 system_pods.go:89] "etcd-test-preload-600653" [574e2cb7-fd51-4181-9856-a2ee90898c03] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0903 23:23:31.942903  144367 system_pods.go:89] "kube-apiserver-test-preload-600653" [99250bfe-723f-4fca-8ab9-7ea4f3a44b44] Running
	I0903 23:23:31.942915  144367 system_pods.go:89] "kube-controller-manager-test-preload-600653" [b6420f31-0c0e-4f48-98b7-6f3e60ec89bc] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0903 23:23:31.942923  144367 system_pods.go:89] "kube-proxy-kzg7w" [f286793c-3cd3-4f54-b061-76a18ad9cf39] Running
	I0903 23:23:31.942929  144367 system_pods.go:89] "kube-scheduler-test-preload-600653" [51c99a2e-9e65-43df-9018-66b2e2bf4b08] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0903 23:23:31.942933  144367 system_pods.go:89] "storage-provisioner" [ec348371-2370-47a9-af61-16853b146032] Running
	I0903 23:23:31.942940  144367 system_pods.go:126] duration metric: took 3.475427ms to wait for k8s-apps to be running ...
	I0903 23:23:31.942954  144367 system_svc.go:44] waiting for kubelet service to be running ....
	I0903 23:23:31.942996  144367 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0903 23:23:31.958647  144367 system_svc.go:56] duration metric: took 15.677569ms WaitForService to wait for kubelet
	I0903 23:23:31.958682  144367 kubeadm.go:578] duration metric: took 8.794845156s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0903 23:23:31.958707  144367 node_conditions.go:102] verifying NodePressure condition ...
	I0903 23:23:31.961663  144367 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0903 23:23:31.961681  144367 node_conditions.go:123] node cpu capacity is 2
	I0903 23:23:31.961695  144367 node_conditions.go:105] duration metric: took 2.9827ms to run NodePressure ...
	I0903 23:23:31.961706  144367 start.go:241] waiting for startup goroutines ...
	I0903 23:23:31.961715  144367 start.go:246] waiting for cluster config update ...
	I0903 23:23:31.961727  144367 start.go:255] writing updated cluster config ...
	I0903 23:23:31.961978  144367 ssh_runner.go:195] Run: rm -f paused
	I0903 23:23:31.966584  144367 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0903 23:23:31.967104  144367 kapi.go:59] client config for test-preload-600653: &rest.Config{Host:"https://192.168.39.11:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21341-109162/.minikube/profiles/test-preload-600653/client.crt", KeyFile:"/home/jenkins/minikube-integration/21341-109162/.minikube/profiles/test-preload-600653/client.key", CAFile:"/home/jenkins/minikube-integration/21341-109162/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x259d6e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0903 23:23:31.970333  144367 pod_ready.go:83] waiting for pod "coredns-6d4b75cb6d-wphgl" in "kube-system" namespace to be "Ready" or be gone ...
	I0903 23:23:31.974679  144367 pod_ready.go:94] pod "coredns-6d4b75cb6d-wphgl" is "Ready"
	I0903 23:23:31.974709  144367 pod_ready.go:86] duration metric: took 4.35279ms for pod "coredns-6d4b75cb6d-wphgl" in "kube-system" namespace to be "Ready" or be gone ...
	I0903 23:23:31.977494  144367 pod_ready.go:83] waiting for pod "etcd-test-preload-600653" in "kube-system" namespace to be "Ready" or be gone ...
	I0903 23:23:32.984136  144367 pod_ready.go:94] pod "etcd-test-preload-600653" is "Ready"
	I0903 23:23:32.984165  144367 pod_ready.go:86] duration metric: took 1.006655397s for pod "etcd-test-preload-600653" in "kube-system" namespace to be "Ready" or be gone ...
	I0903 23:23:32.987449  144367 pod_ready.go:83] waiting for pod "kube-apiserver-test-preload-600653" in "kube-system" namespace to be "Ready" or be gone ...
	I0903 23:23:32.990918  144367 pod_ready.go:94] pod "kube-apiserver-test-preload-600653" is "Ready"
	I0903 23:23:32.990941  144367 pod_ready.go:86] duration metric: took 3.467517ms for pod "kube-apiserver-test-preload-600653" in "kube-system" namespace to be "Ready" or be gone ...
	I0903 23:23:32.993507  144367 pod_ready.go:83] waiting for pod "kube-controller-manager-test-preload-600653" in "kube-system" namespace to be "Ready" or be gone ...
	I0903 23:23:33.170890  144367 pod_ready.go:94] pod "kube-controller-manager-test-preload-600653" is "Ready"
	I0903 23:23:33.170917  144367 pod_ready.go:86] duration metric: took 177.392501ms for pod "kube-controller-manager-test-preload-600653" in "kube-system" namespace to be "Ready" or be gone ...
	I0903 23:23:33.372087  144367 pod_ready.go:83] waiting for pod "kube-proxy-kzg7w" in "kube-system" namespace to be "Ready" or be gone ...
	I0903 23:23:33.770528  144367 pod_ready.go:94] pod "kube-proxy-kzg7w" is "Ready"
	I0903 23:23:33.770571  144367 pod_ready.go:86] duration metric: took 398.449967ms for pod "kube-proxy-kzg7w" in "kube-system" namespace to be "Ready" or be gone ...
	I0903 23:23:33.971148  144367 pod_ready.go:83] waiting for pod "kube-scheduler-test-preload-600653" in "kube-system" namespace to be "Ready" or be gone ...
	I0903 23:23:34.370764  144367 pod_ready.go:94] pod "kube-scheduler-test-preload-600653" is "Ready"
	I0903 23:23:34.370801  144367 pod_ready.go:86] duration metric: took 399.624237ms for pod "kube-scheduler-test-preload-600653" in "kube-system" namespace to be "Ready" or be gone ...
	I0903 23:23:34.370815  144367 pod_ready.go:40] duration metric: took 2.404194789s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
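
Each pod_ready.go wait above polls a pod until it reports the Ready condition or disappears ("Ready" or be gone). With client-go that check looks roughly like the sketch below; `waitPodReadyOrGone` and the hard-coded kubeconfig path are illustrative, not minikube's code:

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        apierrors "k8s.io/apimachinery/pkg/api/errors"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // waitPodReadyOrGone polls until the pod has condition Ready=True or no
    // longer exists; deletion counts as success, matching the log's semantics.
    func waitPodReadyOrGone(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
        for {
            pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
            if apierrors.IsNotFound(err) {
                return nil // pod is gone
            }
            if err == nil {
                for _, c := range pod.Status.Conditions {
                    if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
                        return nil // pod is Ready
                    }
                }
            }
            select {
            case <-ctx.Done():
                return ctx.Err()
            case <-time.After(500 * time.Millisecond):
            }
        }
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/21341-109162/kubeconfig")
        if err != nil {
            panic(err)
        }
        ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
        defer cancel()
        err = waitPodReadyOrGone(ctx, kubernetes.NewForConfigOrDie(cfg), "kube-system", "etcd-test-preload-600653")
        fmt.Println("wait result:", err)
    }
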
	I0903 23:23:34.412159  144367 start.go:617] kubectl: 1.33.2, cluster: 1.24.4 (minor skew: 9)
	I0903 23:23:34.413915  144367 out.go:203] 
	W0903 23:23:34.415120  144367 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.24.4.
	I0903 23:23:34.416196  144367 out.go:179]   - Want kubectl v1.24.4? Try 'minikube kubectl -- get pods -A'
	I0903 23:23:34.417298  144367 out.go:179] * Done! kubectl is now configured to use "test-preload-600653" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Sep 03 23:23:35 test-preload-600653 crio[846]: time="2025-09-03 23:23:35.347442676Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1756941815347418721,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ba9e63e0-2665-49e5-93fa-7ebb6a512f42 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 03 23:23:35 test-preload-600653 crio[846]: time="2025-09-03 23:23:35.347898195Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b74e68d4-06ca-4207-820b-37a17d69038a name=/runtime.v1.RuntimeService/ListContainers
	Sep 03 23:23:35 test-preload-600653 crio[846]: time="2025-09-03 23:23:35.347950246Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b74e68d4-06ca-4207-820b-37a17d69038a name=/runtime.v1.RuntimeService/ListContainers
	Sep 03 23:23:35 test-preload-600653 crio[846]: time="2025-09-03 23:23:35.348165536Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d0b2ed645cebd48e8edf4c92ebb1b3d751cffe018c326b062535af463fa83de9,PodSandboxId:9aca7b221675cc7d578f445fe49b116dc4940c60f946739b333e56afd54ac507,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1756941809891890970,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-wphgl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 32eab7b4-a2f4-46b5-b345-cf864edce160,},Annotations:map[string]string{io.kubernetes.container.hash: cb465551,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bab83ac43c035ea3151b135eccaa6309dca3521704537900c4302f06560fd79d,PodSandboxId:6ea316102fadbd52f64c464d7a892b21f794946f0467d63ecfd245140419bd13,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1756941802958592781,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec348371-2370-47a9-af61-16853b146032,},Annotations:map[string]string{io.kubernetes.container.hash: 46e821ec,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed62382914a8c5c61093c9cc05e95df7be1be41079de1b6a06f9a379733a3253,PodSandboxId:a78b8e684f429ca69007d14c4ea7b61edc282ed31924f241b6d3ce1755cc0920,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1756941802574887318,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kzg7w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f286793c-3cd3-4f54-b061-76a18ad9cf39,},Annotations:map[string]string{io.kubernetes.container.hash: 9fd9f85c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cbb020945d52b4b7163f70ca226f79b210d57a14fc29afa824967d7e9dbfd5e3,PodSandboxId:6ea316102fadbd52f64c464d7a892b21f794946f0467d63ecfd245140419bd13,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1756941802551197644,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec348371-2370-47a9-af61-16853b146032,},Annotations:map[string]string{io.kubernetes.container.hash: 46e821ec,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ddfb86165c3ff5ab269a5ca634a638b8c966b44e606038961aad05d27be5f98,PodSandboxId:69b4b777b6d8e237238cabc181c9ad63acde9e49b699b52d238797883e1b698a,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1756941797635977547,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-600653,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b4bb0730ebafbc61ebca421e00d2bb8,},Annotations:map[string]string{io.kubernetes.container.hash: f56980,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:67d7eba01f57dc86891d25cc88650e8a9f69084d61d14d1e4d04edf9c7259389,PodSandboxId:2e8d163aceb4b0ab8eba1fe34b5530a1908bde312d0d992fa26cbd011b44644c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1756941797656304574,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-600653,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 799bf2d822f973faca8ad188bbb27b1e,},Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:063d831580eed8259f62d2e8a00d3febdac336e544f8d15a22bb9945cc6dc22d,PodSandboxId:87c0d92950a69038dc405fc3a9a1f6f52ef2cd05aa0db3a55d7b2cd519f82ab2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1756941797586474361,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-600653,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da274b8a208b28238ff49cabc6ea86f7,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39d9ba21aeb9e61b642165dea1fdd212f77b6e275739db9071030809dd11cd59,PodSandboxId:2c68110925288275611d4d5a41bc268f71772fc7148231b6af380e6a6ef8f8ef,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1756941797584515439,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-600653,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d15ac08c376ef7822b340dd6f5bd45e6,},Annotations:map[string]string{io.kubernetes.container.hash: dd03708f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b74e68d4-06ca-4207-820b-37a17d69038a name=/runtime.v1.RuntimeService/ListContainers
	Sep 03 23:23:35 test-preload-600653 crio[846]: time="2025-09-03 23:23:35.384715687Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d2145204-a535-4e7f-bd00-c74008b3737f name=/runtime.v1.RuntimeService/Version
	Sep 03 23:23:35 test-preload-600653 crio[846]: time="2025-09-03 23:23:35.384801687Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d2145204-a535-4e7f-bd00-c74008b3737f name=/runtime.v1.RuntimeService/Version
	Sep 03 23:23:35 test-preload-600653 crio[846]: time="2025-09-03 23:23:35.386362330Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=89cd3d14-02a5-48c9-b3eb-11e94e2a8a9c name=/runtime.v1.ImageService/ImageFsInfo
	Sep 03 23:23:35 test-preload-600653 crio[846]: time="2025-09-03 23:23:35.386792189Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1756941815386769522,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=89cd3d14-02a5-48c9-b3eb-11e94e2a8a9c name=/runtime.v1.ImageService/ImageFsInfo
	Sep 03 23:23:35 test-preload-600653 crio[846]: time="2025-09-03 23:23:35.387408112Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1442696f-643b-44e9-a4bb-8d58b8d8989a name=/runtime.v1.RuntimeService/ListContainers
	Sep 03 23:23:35 test-preload-600653 crio[846]: time="2025-09-03 23:23:35.387461750Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1442696f-643b-44e9-a4bb-8d58b8d8989a name=/runtime.v1.RuntimeService/ListContainers
	Sep 03 23:23:35 test-preload-600653 crio[846]: time="2025-09-03 23:23:35.387617460Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d0b2ed645cebd48e8edf4c92ebb1b3d751cffe018c326b062535af463fa83de9,PodSandboxId:9aca7b221675cc7d578f445fe49b116dc4940c60f946739b333e56afd54ac507,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1756941809891890970,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-wphgl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 32eab7b4-a2f4-46b5-b345-cf864edce160,},Annotations:map[string]string{io.kubernetes.container.hash: cb465551,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bab83ac43c035ea3151b135eccaa6309dca3521704537900c4302f06560fd79d,PodSandboxId:6ea316102fadbd52f64c464d7a892b21f794946f0467d63ecfd245140419bd13,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1756941802958592781,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec348371-2370-47a9-af61-16853b146032,},Annotations:map[string]string{io.kubernetes.container.hash: 46e821ec,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed62382914a8c5c61093c9cc05e95df7be1be41079de1b6a06f9a379733a3253,PodSandboxId:a78b8e684f429ca69007d14c4ea7b61edc282ed31924f241b6d3ce1755cc0920,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1756941802574887318,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kzg7w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f286793c-3cd3-4f54-b061-76a18ad9cf39,},Annotations:map[string]string{io.kubernetes.container.hash: 9fd9f85c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cbb020945d52b4b7163f70ca226f79b210d57a14fc29afa824967d7e9dbfd5e3,PodSandboxId:6ea316102fadbd52f64c464d7a892b21f794946f0467d63ecfd245140419bd13,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1756941802551197644,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec348371-2370-47a9-af61-16853b146032,},Annotations:map[string]string{io.kubernetes.container.hash: 46e821ec,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ddfb86165c3ff5ab269a5ca634a638b8c966b44e606038961aad05d27be5f98,PodSandboxId:69b4b777b6d8e237238cabc181c9ad63acde9e49b699b52d238797883e1b698a,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1756941797635977547,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-600653,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b4bb0730ebafbc61ebca421e00d2bb8,},Annotations:map[string]string{io.kubernetes.container.hash: f56980,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:67d7eba01f57dc86891d25cc88650e8a9f69084d61d14d1e4d04edf9c7259389,PodSandboxId:2e8d163aceb4b0ab8eba1fe34b5530a1908bde312d0d992fa26cbd011b44644c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1756941797656304574,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-600653,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 799bf2d822f973faca8ad188bbb27b1e,},Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:063d831580eed8259f62d2e8a00d3febdac336e544f8d15a22bb9945cc6dc22d,PodSandboxId:87c0d92950a69038dc405fc3a9a1f6f52ef2cd05aa0db3a55d7b2cd519f82ab2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1756941797586474361,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-600653,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da274b8a208b28238ff49cabc6ea86f7,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39d9ba21aeb9e61b642165dea1fdd212f77b6e275739db9071030809dd11cd59,PodSandboxId:2c68110925288275611d4d5a41bc268f71772fc7148231b6af380e6a6ef8f8ef,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1756941797584515439,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-600653,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d15ac08c376ef7822b340dd6f5bd45e6,},Annotations:map[string]string{io.kubernetes.container.hash: dd03708f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1442696f-643b-44e9-a4bb-8d58b8d8989a name=/runtime.v1.RuntimeService/ListContainers
	Sep 03 23:23:35 test-preload-600653 crio[846]: time="2025-09-03 23:23:35.423354847Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=cdd17393-bc89-4ad3-9509-f8ac1608b68c name=/runtime.v1.RuntimeService/Version
	Sep 03 23:23:35 test-preload-600653 crio[846]: time="2025-09-03 23:23:35.423446128Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=cdd17393-bc89-4ad3-9509-f8ac1608b68c name=/runtime.v1.RuntimeService/Version
	Sep 03 23:23:35 test-preload-600653 crio[846]: time="2025-09-03 23:23:35.424713610Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3ebc75c9-4ad1-4a6a-b4d4-55dd090d9ba4 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 03 23:23:35 test-preload-600653 crio[846]: time="2025-09-03 23:23:35.425288160Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1756941815425249181,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3ebc75c9-4ad1-4a6a-b4d4-55dd090d9ba4 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 03 23:23:35 test-preload-600653 crio[846]: time="2025-09-03 23:23:35.425990419Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f2c4e4f8-22ce-4b61-8f2f-98e0054d2310 name=/runtime.v1.RuntimeService/ListContainers
	Sep 03 23:23:35 test-preload-600653 crio[846]: time="2025-09-03 23:23:35.426043219Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f2c4e4f8-22ce-4b61-8f2f-98e0054d2310 name=/runtime.v1.RuntimeService/ListContainers
	Sep 03 23:23:35 test-preload-600653 crio[846]: time="2025-09-03 23:23:35.426244672Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d0b2ed645cebd48e8edf4c92ebb1b3d751cffe018c326b062535af463fa83de9,PodSandboxId:9aca7b221675cc7d578f445fe49b116dc4940c60f946739b333e56afd54ac507,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1756941809891890970,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-wphgl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 32eab7b4-a2f4-46b5-b345-cf864edce160,},Annotations:map[string]string{io.kubernetes.container.hash: cb465551,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bab83ac43c035ea3151b135eccaa6309dca3521704537900c4302f06560fd79d,PodSandboxId:6ea316102fadbd52f64c464d7a892b21f794946f0467d63ecfd245140419bd13,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1756941802958592781,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec348371-2370-47a9-af61-16853b146032,},Annotations:map[string]string{io.kubernetes.container.hash: 46e821ec,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed62382914a8c5c61093c9cc05e95df7be1be41079de1b6a06f9a379733a3253,PodSandboxId:a78b8e684f429ca69007d14c4ea7b61edc282ed31924f241b6d3ce1755cc0920,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1756941802574887318,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kzg7w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f286793c-3cd3-4f54-b061-76a18ad9cf39,},Annotations:map[string]string{io.kubernetes.container.hash: 9fd9f85c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cbb020945d52b4b7163f70ca226f79b210d57a14fc29afa824967d7e9dbfd5e3,PodSandboxId:6ea316102fadbd52f64c464d7a892b21f794946f0467d63ecfd245140419bd13,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1756941802551197644,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec348371-2370-47a9-af61-16853b146032,},Annotations:map[string]string{io.kubernetes.container.hash: 46e821ec,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ddfb86165c3ff5ab269a5ca634a638b8c966b44e606038961aad05d27be5f98,PodSandboxId:69b4b777b6d8e237238cabc181c9ad63acde9e49b699b52d238797883e1b698a,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1756941797635977547,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-600653,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b4bb0730ebafbc61ebca421e00d2bb8,},Annotations:map[string]string{io.kubernetes.container.hash: f56980,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:67d7eba01f57dc86891d25cc88650e8a9f69084d61d14d1e4d04edf9c7259389,PodSandboxId:2e8d163aceb4b0ab8eba1fe34b5530a1908bde312d0d992fa26cbd011b44644c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1756941797656304574,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-600653,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 799bf2d822f973faca8ad188bbb27b1e,},Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:063d831580eed8259f62d2e8a00d3febdac336e544f8d15a22bb9945cc6dc22d,PodSandboxId:87c0d92950a69038dc405fc3a9a1f6f52ef2cd05aa0db3a55d7b2cd519f82ab2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1756941797586474361,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-600653,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da274b8a208b28238ff49cabc6ea86f7,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39d9ba21aeb9e61b642165dea1fdd212f77b6e275739db9071030809dd11cd59,PodSandboxId:2c68110925288275611d4d5a41bc268f71772fc7148231b6af380e6a6ef8f8ef,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1756941797584515439,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-600653,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d15ac08c376ef7822b340dd6f5bd45e6,},Annotations:map[string]string{io.kubernetes.container.hash: dd03708f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f2c4e4f8-22ce-4b61-8f2f-98e0054d2310 name=/runtime.v1.RuntimeService/ListContainers
	Sep 03 23:23:35 test-preload-600653 crio[846]: time="2025-09-03 23:23:35.457876463Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=df037ad3-5bc9-433a-8199-425a3c06a6fd name=/runtime.v1.RuntimeService/Version
	Sep 03 23:23:35 test-preload-600653 crio[846]: time="2025-09-03 23:23:35.458119099Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=df037ad3-5bc9-433a-8199-425a3c06a6fd name=/runtime.v1.RuntimeService/Version
	Sep 03 23:23:35 test-preload-600653 crio[846]: time="2025-09-03 23:23:35.459676038Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=07524a52-a244-4feb-88f4-311694224378 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 03 23:23:35 test-preload-600653 crio[846]: time="2025-09-03 23:23:35.460574695Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1756941815460547522,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=07524a52-a244-4feb-88f4-311694224378 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 03 23:23:35 test-preload-600653 crio[846]: time="2025-09-03 23:23:35.463561275Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d52ebd9b-f835-4623-b2ca-b9959cdd6d5c name=/runtime.v1.RuntimeService/ListContainers
	Sep 03 23:23:35 test-preload-600653 crio[846]: time="2025-09-03 23:23:35.463679260Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d52ebd9b-f835-4623-b2ca-b9959cdd6d5c name=/runtime.v1.RuntimeService/ListContainers
	Sep 03 23:23:35 test-preload-600653 crio[846]: time="2025-09-03 23:23:35.464110399Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d0b2ed645cebd48e8edf4c92ebb1b3d751cffe018c326b062535af463fa83de9,PodSandboxId:9aca7b221675cc7d578f445fe49b116dc4940c60f946739b333e56afd54ac507,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1756941809891890970,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-wphgl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 32eab7b4-a2f4-46b5-b345-cf864edce160,},Annotations:map[string]string{io.kubernetes.container.hash: cb465551,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bab83ac43c035ea3151b135eccaa6309dca3521704537900c4302f06560fd79d,PodSandboxId:6ea316102fadbd52f64c464d7a892b21f794946f0467d63ecfd245140419bd13,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1756941802958592781,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec348371-2370-47a9-af61-16853b146032,},Annotations:map[string]string{io.kubernetes.container.hash: 46e821ec,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed62382914a8c5c61093c9cc05e95df7be1be41079de1b6a06f9a379733a3253,PodSandboxId:a78b8e684f429ca69007d14c4ea7b61edc282ed31924f241b6d3ce1755cc0920,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1756941802574887318,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kzg7w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f286793c-3cd3-4f54-b061-76a18ad9cf39,},Annotations:map[string]string{io.kubernetes.container.hash: 9fd9f85c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cbb020945d52b4b7163f70ca226f79b210d57a14fc29afa824967d7e9dbfd5e3,PodSandboxId:6ea316102fadbd52f64c464d7a892b21f794946f0467d63ecfd245140419bd13,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1756941802551197644,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec348371-2370-47a9-af61-16853b146032,},Annotations:map[string]string{io.kubernetes.container.hash: 46e821ec,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ddfb86165c3ff5ab269a5ca634a638b8c966b44e606038961aad05d27be5f98,PodSandboxId:69b4b777b6d8e237238cabc181c9ad63acde9e49b699b52d238797883e1b698a,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1756941797635977547,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-600653,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b4bb0730ebafbc61ebca421e00d2bb8,},Annotations:map[string]string{io.kubernetes.container.hash: f56980,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:67d7eba01f57dc86891d25cc88650e8a9f69084d61d14d1e4d04edf9c7259389,PodSandboxId:2e8d163aceb4b0ab8eba1fe34b5530a1908bde312d0d992fa26cbd011b44644c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1756941797656304574,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-600653,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 799bf2d822f973faca8ad188bbb27b1e,},Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:063d831580eed8259f62d2e8a00d3febdac336e544f8d15a22bb9945cc6dc22d,PodSandboxId:87c0d92950a69038dc405fc3a9a1f6f52ef2cd05aa0db3a55d7b2cd519f82ab2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1756941797586474361,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-600653,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da274b8a208b28238ff49cabc6ea86f7,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39d9ba21aeb9e61b642165dea1fdd212f77b6e275739db9071030809dd11cd59,PodSandboxId:2c68110925288275611d4d5a41bc268f71772fc7148231b6af380e6a6ef8f8ef,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1756941797584515439,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-600653,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d15ac08c376ef7822b340dd6f5bd45e6,},Annotations:map[string]string{io.kubernetes.container.hash: dd03708f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d52ebd9b-f835-4623-b2ca-b9959cdd6d5c name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	d0b2ed645cebd       a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03   5 seconds ago       Running             coredns                   1                   9aca7b221675c       coredns-6d4b75cb6d-wphgl
	bab83ac43c035       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   12 seconds ago      Running             storage-provisioner       2                   6ea316102fadb       storage-provisioner
	ed62382914a8c       7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7   12 seconds ago      Running             kube-proxy                1                   a78b8e684f429       kube-proxy-kzg7w
	cbb020945d52b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   12 seconds ago      Exited              storage-provisioner       1                   6ea316102fadb       storage-provisioner
	67d7eba01f57d       03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9   17 seconds ago      Running             kube-scheduler            1                   2e8d163aceb4b       kube-scheduler-test-preload-600653
	3ddfb86165c3f       aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b   17 seconds ago      Running             etcd                      1                   69b4b777b6d8e       etcd-test-preload-600653
	063d831580eed       1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48   17 seconds ago      Running             kube-controller-manager   1                   87c0d92950a69       kube-controller-manager-test-preload-600653
	39d9ba21aeb9e       6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d   17 seconds ago      Running             kube-apiserver            1                   2c68110925288       kube-apiserver-test-preload-600653
	
	
	==> coredns [d0b2ed645cebd48e8edf4c92ebb1b3d751cffe018c326b062535af463fa83de9] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	.:53
	[INFO] plugin/reload: Running configuration MD5 = bbeeddb09682f41960fef01b05cb3a3d
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	[INFO] 127.0.0.1:47224 - 34875 "HINFO IN 6729509152032849446.5554060105494863275. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.027238885s
	
	
	==> describe nodes <==
	Name:               test-preload-600653
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=test-preload-600653
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b3583632deefb20d71cab8d8ac0a8c3504aed1fb
	                    minikube.k8s.io/name=test-preload-600653
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_03T23_21_59_0700
	                    minikube.k8s.io/version=v1.36.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 03 Sep 2025 23:21:55 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  test-preload-600653
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 03 Sep 2025 23:23:31 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 03 Sep 2025 23:23:31 +0000   Wed, 03 Sep 2025 23:21:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 03 Sep 2025 23:23:31 +0000   Wed, 03 Sep 2025 23:21:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 03 Sep 2025 23:23:31 +0000   Wed, 03 Sep 2025 23:21:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 03 Sep 2025 23:23:31 +0000   Wed, 03 Sep 2025 23:23:31 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.11
	  Hostname:    test-preload-600653
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3042712Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3042712Ki
	  pods:               110
	System Info:
	  Machine ID:                 8a5daec20e674a1888c176f20c9c3179
	  System UUID:                8a5daec2-0e67-4a18-88c1-76f20c9c3179
	  Boot ID:                    5a32fbe8-a7c2-4e7c-a63a-2aab879ab8e3
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.24.4
	  Kube-Proxy Version:         v1.24.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-wphgl                       100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     84s
	  kube-system                 etcd-test-preload-600653                       100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         97s
	  kube-system                 kube-apiserver-test-preload-600653             250m (12%)    0 (0%)      0 (0%)           0 (0%)         96s
	  kube-system                 kube-controller-manager-test-preload-600653    200m (10%)    0 (0%)      0 (0%)           0 (0%)         96s
	  kube-system                 kube-proxy-kzg7w                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         84s
	  kube-system                 kube-scheduler-test-preload-600653             100m (5%)     0 (0%)      0 (0%)           0 (0%)         96s
	  kube-system                 storage-provisioner                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         82s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (5%)  170Mi (5%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 12s                kube-proxy       
	  Normal  Starting                 83s                kube-proxy       
	  Normal  Starting                 96s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  96s                kubelet          Node test-preload-600653 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    96s                kubelet          Node test-preload-600653 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     96s                kubelet          Node test-preload-600653 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  96s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                86s                kubelet          Node test-preload-600653 status is now: NodeReady
	  Normal  RegisteredNode           85s                node-controller  Node test-preload-600653 event: Registered Node test-preload-600653 in Controller
	  Normal  Starting                 19s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  19s (x8 over 19s)  kubelet          Node test-preload-600653 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    19s (x8 over 19s)  kubelet          Node test-preload-600653 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     19s (x7 over 19s)  kubelet          Node test-preload-600653 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  19s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           1s                 node-controller  Node test-preload-600653 event: Registered Node test-preload-600653 in Controller
	
	
	==> dmesg <==
	[Sep 3 23:22] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000006] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.000029] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.005063] (rpcbind)[117]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +0.970548] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000018] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Sep 3 23:23] kauditd_printk_skb: 4 callbacks suppressed
	[  +0.097193] kauditd_printk_skb: 46 callbacks suppressed
	[  +5.900066] kauditd_printk_skb: 226 callbacks suppressed
	[  +0.000089] kauditd_printk_skb: 120 callbacks suppressed
	
	
	==> etcd [3ddfb86165c3ff5ab269a5ca634a638b8c966b44e606038961aad05d27be5f98] <==
	{"level":"info","ts":"2025-09-03T23:23:18.136Z","caller":"etcdserver/server.go:851","msg":"starting etcd server","local-member-id":"b546310005a4f8aa","local-server-version":"3.5.3","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2025-09-03T23:23:18.137Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b546310005a4f8aa switched to configuration voters=(13062181645399161002)"}
	{"level":"info","ts":"2025-09-03T23:23:18.137Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"7cea85d65aab3581","local-member-id":"b546310005a4f8aa","added-peer-id":"b546310005a4f8aa","added-peer-peer-urls":["https://192.168.39.11:2380"]}
	{"level":"info","ts":"2025-09-03T23:23:18.137Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"7cea85d65aab3581","local-member-id":"b546310005a4f8aa","cluster-version":"3.5"}
	{"level":"info","ts":"2025-09-03T23:23:18.137Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-09-03T23:23:18.146Z","caller":"etcdserver/server.go:736","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"b546310005a4f8aa","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	{"level":"info","ts":"2025-09-03T23:23:18.151Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-09-03T23:23:18.151Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"b546310005a4f8aa","initial-advertise-peer-urls":["https://192.168.39.11:2380"],"listen-peer-urls":["https://192.168.39.11:2380"],"advertise-client-urls":["https://192.168.39.11:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.11:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-09-03T23:23:18.151Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-09-03T23:23:18.151Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.39.11:2380"}
	{"level":"info","ts":"2025-09-03T23:23:18.151Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.39.11:2380"}
	{"level":"info","ts":"2025-09-03T23:23:19.096Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b546310005a4f8aa is starting a new election at term 2"}
	{"level":"info","ts":"2025-09-03T23:23:19.096Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b546310005a4f8aa became pre-candidate at term 2"}
	{"level":"info","ts":"2025-09-03T23:23:19.096Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b546310005a4f8aa received MsgPreVoteResp from b546310005a4f8aa at term 2"}
	{"level":"info","ts":"2025-09-03T23:23:19.096Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b546310005a4f8aa became candidate at term 3"}
	{"level":"info","ts":"2025-09-03T23:23:19.096Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b546310005a4f8aa received MsgVoteResp from b546310005a4f8aa at term 3"}
	{"level":"info","ts":"2025-09-03T23:23:19.096Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b546310005a4f8aa became leader at term 3"}
	{"level":"info","ts":"2025-09-03T23:23:19.096Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b546310005a4f8aa elected leader b546310005a4f8aa at term 3"}
	{"level":"info","ts":"2025-09-03T23:23:19.097Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"b546310005a4f8aa","local-member-attributes":"{Name:test-preload-600653 ClientURLs:[https://192.168.39.11:2379]}","request-path":"/0/members/b546310005a4f8aa/attributes","cluster-id":"7cea85d65aab3581","publish-timeout":"7s"}
	{"level":"info","ts":"2025-09-03T23:23:19.097Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-09-03T23:23:19.098Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-09-03T23:23:19.098Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-09-03T23:23:19.098Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-09-03T23:23:19.099Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-09-03T23:23:19.100Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.39.11:2379"}
	
	
	==> kernel <==
	 23:23:35 up 0 min,  0 users,  load average: 0.87, 0.23, 0.08
	Linux test-preload-600653 6.6.95 #1 SMP PREEMPT_DYNAMIC Sat Jul 26 03:21:57 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [39d9ba21aeb9e61b642165dea1fdd212f77b6e275739db9071030809dd11cd59] <==
	I0903 23:23:21.284369       1 controller.go:85] Starting OpenAPI V3 controller
	I0903 23:23:21.284451       1 naming_controller.go:291] Starting NamingConditionController
	I0903 23:23:21.284542       1 establishing_controller.go:76] Starting EstablishingController
	I0903 23:23:21.284575       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0903 23:23:21.284645       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0903 23:23:21.284726       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0903 23:23:21.340330       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0903 23:23:21.342020       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0903 23:23:21.344569       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0903 23:23:21.347554       1 cache.go:39] Caches are synced for autoregister controller
	I0903 23:23:21.349130       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0903 23:23:21.357911       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	E0903 23:23:21.359362       1 controller.go:169] Error removing old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
	I0903 23:23:21.368985       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0903 23:23:21.384969       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0903 23:23:21.949601       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0903 23:23:22.244433       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0903 23:23:22.753878       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0903 23:23:23.064549       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0903 23:23:23.074264       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0903 23:23:23.105585       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0903 23:23:23.122816       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0903 23:23:23.127837       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0903 23:23:34.474814       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0903 23:23:34.478122       1 controller.go:611] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [063d831580eed8259f62d2e8a00d3febdac336e544f8d15a22bb9945cc6dc22d] <==
	I0903 23:23:34.413940       1 shared_informer.go:262] Caches are synced for HPA
	I0903 23:23:34.417335       1 shared_informer.go:262] Caches are synced for stateful set
	I0903 23:23:34.421614       1 shared_informer.go:262] Caches are synced for persistent volume
	I0903 23:23:34.437345       1 shared_informer.go:262] Caches are synced for PVC protection
	I0903 23:23:34.440357       1 shared_informer.go:262] Caches are synced for daemon sets
	I0903 23:23:34.441993       1 shared_informer.go:262] Caches are synced for job
	I0903 23:23:34.444310       1 shared_informer.go:262] Caches are synced for deployment
	I0903 23:23:34.447196       1 shared_informer.go:262] Caches are synced for ephemeral
	I0903 23:23:34.450666       1 shared_informer.go:262] Caches are synced for resource quota
	I0903 23:23:34.452868       1 shared_informer.go:262] Caches are synced for endpoint
	I0903 23:23:34.457891       1 shared_informer.go:262] Caches are synced for endpoint_slice
	I0903 23:23:34.459914       1 shared_informer.go:262] Caches are synced for disruption
	I0903 23:23:34.459935       1 disruption.go:371] Sending events to api server.
	I0903 23:23:34.463009       1 shared_informer.go:262] Caches are synced for taint
	I0903 23:23:34.463151       1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: 
	W0903 23:23:34.463222       1 node_lifecycle_controller.go:1014] Missing timestamp for Node test-preload-600653. Assuming now as a timestamp.
	I0903 23:23:34.463262       1 node_lifecycle_controller.go:1215] Controller detected that zone  is now in state Normal.
	I0903 23:23:34.463269       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I0903 23:23:34.463629       1 event.go:294] "Event occurred" object="test-preload-600653" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node test-preload-600653 event: Registered Node test-preload-600653 in Controller"
	I0903 23:23:34.467465       1 shared_informer.go:262] Caches are synced for GC
	I0903 23:23:34.482824       1 shared_informer.go:262] Caches are synced for attach detach
	I0903 23:23:34.495418       1 shared_informer.go:262] Caches are synced for ReplicationController
	I0903 23:23:34.891202       1 shared_informer.go:262] Caches are synced for garbage collector
	I0903 23:23:34.909869       1 shared_informer.go:262] Caches are synced for garbage collector
	I0903 23:23:34.909886       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	
	
	==> kube-proxy [ed62382914a8c5c61093c9cc05e95df7be1be41079de1b6a06f9a379733a3253] <==
	I0903 23:23:22.723161       1 node.go:163] Successfully retrieved node IP: 192.168.39.11
	I0903 23:23:22.723234       1 server_others.go:138] "Detected node IP" address="192.168.39.11"
	I0903 23:23:22.723296       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0903 23:23:22.748615       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0903 23:23:22.748643       1 server_others.go:206] "Using iptables Proxier"
	I0903 23:23:22.748676       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0903 23:23:22.748994       1 server.go:661] "Version info" version="v1.24.4"
	I0903 23:23:22.749016       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0903 23:23:22.749779       1 config.go:317] "Starting service config controller"
	I0903 23:23:22.749812       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0903 23:23:22.749833       1 config.go:226] "Starting endpoint slice config controller"
	I0903 23:23:22.749836       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0903 23:23:22.750773       1 config.go:444] "Starting node config controller"
	I0903 23:23:22.750797       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0903 23:23:22.850437       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0903 23:23:22.850487       1 shared_informer.go:262] Caches are synced for service config
	I0903 23:23:22.852201       1 shared_informer.go:262] Caches are synced for node config
	
	
	==> kube-scheduler [67d7eba01f57dc86891d25cc88650e8a9f69084d61d14d1e4d04edf9c7259389] <==
	I0903 23:23:18.398098       1 serving.go:348] Generated self-signed cert in-memory
	W0903 23:23:21.289919       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0903 23:23:21.291868       1 authentication.go:346] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0903 23:23:21.291975       1 authentication.go:347] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0903 23:23:21.291996       1 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0903 23:23:21.324250       1 server.go:147] "Starting Kubernetes Scheduler" version="v1.24.4"
	I0903 23:23:21.324330       1 server.go:149] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0903 23:23:21.327020       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0903 23:23:21.328138       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0903 23:23:21.330087       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0903 23:23:21.328159       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0903 23:23:21.430730       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 03 23:23:21 test-preload-600653 kubelet[1465]: I0903 23:23:21.788666    1465 topology_manager.go:200] "Topology Admit Handler"
	Sep 03 23:23:21 test-preload-600653 kubelet[1465]: I0903 23:23:21.789441    1465 topology_manager.go:200] "Topology Admit Handler"
	Sep 03 23:23:21 test-preload-600653 kubelet[1465]: E0903 23:23:21.789880    1465 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-wphgl" podUID=32eab7b4-a2f4-46b5-b345-cf864edce160
	Sep 03 23:23:21 test-preload-600653 kubelet[1465]: I0903 23:23:21.833802    1465 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f286793c-3cd3-4f54-b061-76a18ad9cf39-kube-proxy\") pod \"kube-proxy-kzg7w\" (UID: \"f286793c-3cd3-4f54-b061-76a18ad9cf39\") " pod="kube-system/kube-proxy-kzg7w"
	Sep 03 23:23:21 test-preload-600653 kubelet[1465]: I0903 23:23:21.834183    1465 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f286793c-3cd3-4f54-b061-76a18ad9cf39-lib-modules\") pod \"kube-proxy-kzg7w\" (UID: \"f286793c-3cd3-4f54-b061-76a18ad9cf39\") " pod="kube-system/kube-proxy-kzg7w"
	Sep 03 23:23:21 test-preload-600653 kubelet[1465]: I0903 23:23:21.834248    1465 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b7hwf\" (UniqueName: \"kubernetes.io/projected/ec348371-2370-47a9-af61-16853b146032-kube-api-access-b7hwf\") pod \"storage-provisioner\" (UID: \"ec348371-2370-47a9-af61-16853b146032\") " pod="kube-system/storage-provisioner"
	Sep 03 23:23:21 test-preload-600653 kubelet[1465]: I0903 23:23:21.834376    1465 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/32eab7b4-a2f4-46b5-b345-cf864edce160-config-volume\") pod \"coredns-6d4b75cb6d-wphgl\" (UID: \"32eab7b4-a2f4-46b5-b345-cf864edce160\") " pod="kube-system/coredns-6d4b75cb6d-wphgl"
	Sep 03 23:23:21 test-preload-600653 kubelet[1465]: I0903 23:23:21.834428    1465 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f286793c-3cd3-4f54-b061-76a18ad9cf39-xtables-lock\") pod \"kube-proxy-kzg7w\" (UID: \"f286793c-3cd3-4f54-b061-76a18ad9cf39\") " pod="kube-system/kube-proxy-kzg7w"
	Sep 03 23:23:21 test-preload-600653 kubelet[1465]: I0903 23:23:21.834450    1465 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fnk48\" (UniqueName: \"kubernetes.io/projected/f286793c-3cd3-4f54-b061-76a18ad9cf39-kube-api-access-fnk48\") pod \"kube-proxy-kzg7w\" (UID: \"f286793c-3cd3-4f54-b061-76a18ad9cf39\") " pod="kube-system/kube-proxy-kzg7w"
	Sep 03 23:23:21 test-preload-600653 kubelet[1465]: I0903 23:23:21.834468    1465 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t45hc\" (UniqueName: \"kubernetes.io/projected/32eab7b4-a2f4-46b5-b345-cf864edce160-kube-api-access-t45hc\") pod \"coredns-6d4b75cb6d-wphgl\" (UID: \"32eab7b4-a2f4-46b5-b345-cf864edce160\") " pod="kube-system/coredns-6d4b75cb6d-wphgl"
	Sep 03 23:23:21 test-preload-600653 kubelet[1465]: I0903 23:23:21.834486    1465 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/ec348371-2370-47a9-af61-16853b146032-tmp\") pod \"storage-provisioner\" (UID: \"ec348371-2370-47a9-af61-16853b146032\") " pod="kube-system/storage-provisioner"
	Sep 03 23:23:21 test-preload-600653 kubelet[1465]: I0903 23:23:21.834496    1465 reconciler.go:159] "Reconciler: start to sync state"
	Sep 03 23:23:21 test-preload-600653 kubelet[1465]: E0903 23:23:21.864808    1465 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Sep 03 23:23:21 test-preload-600653 kubelet[1465]: E0903 23:23:21.937978    1465 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Sep 03 23:23:21 test-preload-600653 kubelet[1465]: E0903 23:23:21.938119    1465 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/32eab7b4-a2f4-46b5-b345-cf864edce160-config-volume podName:32eab7b4-a2f4-46b5-b345-cf864edce160 nodeName:}" failed. No retries permitted until 2025-09-03 23:23:22.438091632 +0000 UTC m=+5.764101455 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/32eab7b4-a2f4-46b5-b345-cf864edce160-config-volume") pod "coredns-6d4b75cb6d-wphgl" (UID: "32eab7b4-a2f4-46b5-b345-cf864edce160") : object "kube-system"/"coredns" not registered
	Sep 03 23:23:22 test-preload-600653 kubelet[1465]: E0903 23:23:22.440696    1465 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Sep 03 23:23:22 test-preload-600653 kubelet[1465]: E0903 23:23:22.440770    1465 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/32eab7b4-a2f4-46b5-b345-cf864edce160-config-volume podName:32eab7b4-a2f4-46b5-b345-cf864edce160 nodeName:}" failed. No retries permitted until 2025-09-03 23:23:23.440750603 +0000 UTC m=+6.766760405 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/32eab7b4-a2f4-46b5-b345-cf864edce160-config-volume") pod "coredns-6d4b75cb6d-wphgl" (UID: "32eab7b4-a2f4-46b5-b345-cf864edce160") : object "kube-system"/"coredns" not registered
	Sep 03 23:23:22 test-preload-600653 kubelet[1465]: I0903 23:23:22.903590    1465 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=a843c77d-f7d2-434c-a23e-9f29c8df3e51 path="/var/lib/kubelet/pods/a843c77d-f7d2-434c-a23e-9f29c8df3e51/volumes"
	Sep 03 23:23:22 test-preload-600653 kubelet[1465]: I0903 23:23:22.951379    1465 scope.go:110] "RemoveContainer" containerID="cbb020945d52b4b7163f70ca226f79b210d57a14fc29afa824967d7e9dbfd5e3"
	Sep 03 23:23:23 test-preload-600653 kubelet[1465]: E0903 23:23:23.448351    1465 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Sep 03 23:23:23 test-preload-600653 kubelet[1465]: E0903 23:23:23.448403    1465 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/32eab7b4-a2f4-46b5-b345-cf864edce160-config-volume podName:32eab7b4-a2f4-46b5-b345-cf864edce160 nodeName:}" failed. No retries permitted until 2025-09-03 23:23:25.448390009 +0000 UTC m=+8.774399813 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/32eab7b4-a2f4-46b5-b345-cf864edce160-config-volume") pod "coredns-6d4b75cb6d-wphgl" (UID: "32eab7b4-a2f4-46b5-b345-cf864edce160") : object "kube-system"/"coredns" not registered
	Sep 03 23:23:23 test-preload-600653 kubelet[1465]: E0903 23:23:23.882539    1465 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-wphgl" podUID=32eab7b4-a2f4-46b5-b345-cf864edce160
	Sep 03 23:23:25 test-preload-600653 kubelet[1465]: E0903 23:23:25.462883    1465 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Sep 03 23:23:25 test-preload-600653 kubelet[1465]: E0903 23:23:25.462968    1465 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/32eab7b4-a2f4-46b5-b345-cf864edce160-config-volume podName:32eab7b4-a2f4-46b5-b345-cf864edce160 nodeName:}" failed. No retries permitted until 2025-09-03 23:23:29.462952823 +0000 UTC m=+12.788962627 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/32eab7b4-a2f4-46b5-b345-cf864edce160-config-volume") pod "coredns-6d4b75cb6d-wphgl" (UID: "32eab7b4-a2f4-46b5-b345-cf864edce160") : object "kube-system"/"coredns" not registered
	Sep 03 23:23:25 test-preload-600653 kubelet[1465]: E0903 23:23:25.882816    1465 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-wphgl" podUID=32eab7b4-a2f4-46b5-b345-cf864edce160
	
	
	==> storage-provisioner [bab83ac43c035ea3151b135eccaa6309dca3521704537900c4302f06560fd79d] <==
	I0903 23:23:23.048971       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0903 23:23:23.062837       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0903 23:23:23.063397       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	
	
	==> storage-provisioner [cbb020945d52b4b7163f70ca226f79b210d57a14fc29afa824967d7e9dbfd5e3] <==
	I0903 23:23:22.656282       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0903 23:23:22.662429       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	

                                                
                                                
-- /stdout --
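Note on the kubelet errors in the log above: the repeated "NetworkPluginNotReady ... No CNI configuration file in /etc/cni/net.d/" messages mean CRI-O had not yet found any CNI network definition on the restarted node; CRI-O loads the lexicographically first config file in that directory. For reference only, below is a minimal sketch of a bridge CNI config of the shape being checked for, written as a small Go program since the surrounding tooling is Go. The file name 1-k8s.conf and the 10.244.0.0/16 subnet are illustrative assumptions, not values taken from this run; on this run minikube writes the real bridge config itself shortly after start and the errors clear on their own.

	package main

	import (
		"log"
		"os"
	)

	// bridgeConf is a minimal CNI bridge-plugin config of the shape CRI-O
	// scans for in /etc/cni/net.d/. The subnet here is an illustrative
	// assumption, not the subnet used by this test run.
	const bridgeConf = `{
	  "cniVersion": "0.4.0",
	  "name": "k8s-pod-network",
	  "type": "bridge",
	  "bridge": "cni0",
	  "isGateway": true,
	  "ipMasq": true,
	  "ipam": {
	    "type": "host-local",
	    "subnet": "10.244.0.0/16",
	    "routes": [{"dst": "0.0.0.0/0"}]
	  }
	}`

	func main() {
		// Ensure the directory named in the kubelet error exists.
		if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
			log.Fatal(err)
		}
		// CRI-O picks the lexicographically first file, so a low-sorting
		// (illustrative) name makes this the config that gets loaded.
		if err := os.WriteFile("/etc/cni/net.d/1-k8s.conf", []byte(bridgeConf), 0o644); err != nil {
			log.Fatal(err)
		}
	}
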
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-600653 -n test-preload-600653
helpers_test.go:269: (dbg) Run:  kubectl --context test-preload-600653 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPreload FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "test-preload-600653" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-600653
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-600653: (1.000456736s)
--- FAIL: TestPreload (172.42s)

                                                
                                    
x
+
TestKubernetesUpgrade (416.73s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-938492 --memory=3072 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-938492 --memory=3072 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109 (4m57.425659268s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-938492] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21341
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21341-109162/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21341-109162/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "kubernetes-upgrade-938492" primary control-plane node in "kubernetes-upgrade-938492" cluster
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0903 23:25:35.339428  145896 out.go:360] Setting OutFile to fd 1 ...
	I0903 23:25:35.339564  145896 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0903 23:25:35.339571  145896 out.go:374] Setting ErrFile to fd 2...
	I0903 23:25:35.339578  145896 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0903 23:25:35.339851  145896 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21341-109162/.minikube/bin
	I0903 23:25:35.340613  145896 out.go:368] Setting JSON to false
	I0903 23:25:35.341860  145896 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":7679,"bootTime":1756934256,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0903 23:25:35.341942  145896 start.go:140] virtualization: kvm guest
	I0903 23:25:35.344089  145896 out.go:179] * [kubernetes-upgrade-938492] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	I0903 23:25:35.345594  145896 notify.go:220] Checking for updates...
	I0903 23:25:35.346092  145896 out.go:179]   - MINIKUBE_LOCATION=21341
	I0903 23:25:35.347570  145896 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0903 23:25:35.348733  145896 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21341-109162/kubeconfig
	I0903 23:25:35.349801  145896 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21341-109162/.minikube
	I0903 23:25:35.350933  145896 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0903 23:25:35.352063  145896 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0903 23:25:35.353118  145896 driver.go:421] Setting default libvirt URI to qemu:///system
	I0903 23:25:35.396962  145896 out.go:179] * Using the kvm2 driver based on user configuration
	I0903 23:25:35.398058  145896 start.go:304] selected driver: kvm2
	I0903 23:25:35.398079  145896 start.go:918] validating driver "kvm2" against <nil>
	I0903 23:25:35.398094  145896 start.go:929] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0903 23:25:35.399089  145896 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0903 23:25:35.417240  145896 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21341-109162/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0903 23:25:35.438988  145896 install.go:137] /home/jenkins/minikube-integration/21341-109162/.minikube/bin/docker-machine-driver-kvm2 version is 1.36.0
	I0903 23:25:35.439048  145896 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0903 23:25:35.439264  145896 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I0903 23:25:35.439284  145896 cni.go:84] Creating CNI manager for ""
	I0903 23:25:35.439328  145896 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0903 23:25:35.439337  145896 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0903 23:25:35.439393  145896 start.go:348] cluster config:
	{Name:kubernetes-upgrade-938492 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-938492 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0903 23:25:35.439508  145896 iso.go:125] acquiring lock: {Name:mk1032fbced3c9e76ba5a04480289e9f07d0eb9c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0903 23:25:35.440944  145896 out.go:179] * Starting "kubernetes-upgrade-938492" primary control-plane node in "kubernetes-upgrade-938492" cluster
	I0903 23:25:35.441854  145896 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0903 23:25:35.441896  145896 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21341-109162/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0903 23:25:35.441910  145896 cache.go:58] Caching tarball of preloaded images
	I0903 23:25:35.441991  145896 preload.go:172] Found /home/jenkins/minikube-integration/21341-109162/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0903 23:25:35.442004  145896 cache.go:61] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0903 23:25:35.442435  145896 profile.go:143] Saving config to /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/kubernetes-upgrade-938492/config.json ...
	I0903 23:25:35.442473  145896 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/kubernetes-upgrade-938492/config.json: {Name:mk21e6bce4cd9aa5e55260f2e795554dd60c5c80 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0903 23:25:35.442631  145896 start.go:360] acquireMachinesLock for kubernetes-upgrade-938492: {Name:mkcbe368d68a51a2a3c0eadc653c4df7d9736b4b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0903 23:26:02.094265  145896 start.go:364] duration metric: took 26.65160649s to acquireMachinesLock for "kubernetes-upgrade-938492"
	I0903 23:26:02.094362  145896 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-938492 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21147/minikube-v1.36.0-1753487480-21147-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-938492 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0903 23:26:02.094474  145896 start.go:125] createHost starting for "" (driver="kvm2")
	I0903 23:26:02.096499  145896 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0903 23:26:02.096698  145896 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21341-109162/.minikube/bin/docker-machine-driver-kvm2
	I0903 23:26:02.096739  145896 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0903 23:26:02.113292  145896 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34061
	I0903 23:26:02.113746  145896 main.go:141] libmachine: () Calling .GetVersion
	I0903 23:26:02.114239  145896 main.go:141] libmachine: Using API Version  1
	I0903 23:26:02.114265  145896 main.go:141] libmachine: () Calling .SetConfigRaw
	I0903 23:26:02.114740  145896 main.go:141] libmachine: () Calling .GetMachineName
	I0903 23:26:02.114981  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) Calling .GetMachineName
	I0903 23:26:02.115136  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) Calling .DriverName
	I0903 23:26:02.115305  145896 start.go:159] libmachine.API.Create for "kubernetes-upgrade-938492" (driver="kvm2")
	I0903 23:26:02.115336  145896 client.go:168] LocalClient.Create starting
	I0903 23:26:02.115373  145896 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21341-109162/.minikube/certs/ca.pem
	I0903 23:26:02.115409  145896 main.go:141] libmachine: Decoding PEM data...
	I0903 23:26:02.115426  145896 main.go:141] libmachine: Parsing certificate...
	I0903 23:26:02.115495  145896 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21341-109162/.minikube/certs/cert.pem
	I0903 23:26:02.115520  145896 main.go:141] libmachine: Decoding PEM data...
	I0903 23:26:02.115537  145896 main.go:141] libmachine: Parsing certificate...
	I0903 23:26:02.115561  145896 main.go:141] libmachine: Running pre-create checks...
	I0903 23:26:02.115575  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) Calling .PreCreateCheck
	I0903 23:26:02.115952  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) Calling .GetConfigRaw
	I0903 23:26:02.116416  145896 main.go:141] libmachine: Creating machine...
	I0903 23:26:02.116435  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) Calling .Create
	I0903 23:26:02.116588  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) creating KVM machine...
	I0903 23:26:02.116609  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) creating network...
	I0903 23:26:02.117831  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) DBG | found existing default KVM network
	I0903 23:26:02.118755  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) DBG | I0903 23:26:02.118578  148340 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:49:ae:a4} reservation:<nil>}
	I0903 23:26:02.119485  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) DBG | I0903 23:26:02.119387  148340 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000246070}
	I0903 23:26:02.119509  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) DBG | created network xml: 
	I0903 23:26:02.119521  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) DBG | <network>
	I0903 23:26:02.119530  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) DBG |   <name>mk-kubernetes-upgrade-938492</name>
	I0903 23:26:02.119550  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) DBG |   <dns enable='no'/>
	I0903 23:26:02.119557  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) DBG |   
	I0903 23:26:02.119568  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) DBG |   <ip address='192.168.50.1' netmask='255.255.255.0'>
	I0903 23:26:02.119586  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) DBG |     <dhcp>
	I0903 23:26:02.119601  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) DBG |       <range start='192.168.50.2' end='192.168.50.253'/>
	I0903 23:26:02.119615  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) DBG |     </dhcp>
	I0903 23:26:02.119623  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) DBG |   </ip>
	I0903 23:26:02.119636  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) DBG |   
	I0903 23:26:02.119672  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) DBG | </network>
	I0903 23:26:02.119707  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) DBG | 
	I0903 23:26:02.124762  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) DBG | trying to create private KVM network mk-kubernetes-upgrade-938492 192.168.50.0/24...
	I0903 23:26:02.194620  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) DBG | private KVM network mk-kubernetes-upgrade-938492 192.168.50.0/24 created
	I0903 23:26:02.194661  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) DBG | I0903 23:26:02.194587  148340 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/21341-109162/.minikube
	I0903 23:26:02.194682  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) setting up store path in /home/jenkins/minikube-integration/21341-109162/.minikube/machines/kubernetes-upgrade-938492 ...
	I0903 23:26:02.194703  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) building disk image from file:///home/jenkins/minikube-integration/21341-109162/.minikube/cache/iso/amd64/minikube-v1.36.0-1753487480-21147-amd64.iso
	I0903 23:26:02.194728  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) Downloading /home/jenkins/minikube-integration/21341-109162/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/21341-109162/.minikube/cache/iso/amd64/minikube-v1.36.0-1753487480-21147-amd64.iso...
	I0903 23:26:02.492278  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) DBG | I0903 23:26:02.492153  148340 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/21341-109162/.minikube/machines/kubernetes-upgrade-938492/id_rsa...
	I0903 23:26:02.769690  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) DBG | I0903 23:26:02.769513  148340 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/21341-109162/.minikube/machines/kubernetes-upgrade-938492/kubernetes-upgrade-938492.rawdisk...
	I0903 23:26:02.769731  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) DBG | Writing magic tar header
	I0903 23:26:02.769749  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) DBG | Writing SSH key tar header
	I0903 23:26:02.769763  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) DBG | I0903 23:26:02.769670  148340 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/21341-109162/.minikube/machines/kubernetes-upgrade-938492 ...
	I0903 23:26:02.769856  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21341-109162/.minikube/machines/kubernetes-upgrade-938492
	I0903 23:26:02.769892  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) setting executable bit set on /home/jenkins/minikube-integration/21341-109162/.minikube/machines/kubernetes-upgrade-938492 (perms=drwx------)
	I0903 23:26:02.769906  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21341-109162/.minikube/machines
	I0903 23:26:02.769921  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) setting executable bit set on /home/jenkins/minikube-integration/21341-109162/.minikube/machines (perms=drwxr-xr-x)
	I0903 23:26:02.769936  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) setting executable bit set on /home/jenkins/minikube-integration/21341-109162/.minikube (perms=drwxr-xr-x)
	I0903 23:26:02.769954  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) setting executable bit set on /home/jenkins/minikube-integration/21341-109162 (perms=drwxrwxr-x)
	I0903 23:26:02.769963  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0903 23:26:02.769969  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21341-109162/.minikube
	I0903 23:26:02.769984  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21341-109162
	I0903 23:26:02.769997  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0903 23:26:02.770007  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0903 23:26:02.770020  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) DBG | checking permissions on dir: /home/jenkins
	I0903 23:26:02.770031  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) DBG | checking permissions on dir: /home
	I0903 23:26:02.770042  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) DBG | skipping /home - not owner
	I0903 23:26:02.770053  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) creating domain...
	I0903 23:26:02.771089  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) define libvirt domain using xml: 
	I0903 23:26:02.771118  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) <domain type='kvm'>
	I0903 23:26:02.771129  145896 main.go:141] libmachine: (kubernetes-upgrade-938492)   <name>kubernetes-upgrade-938492</name>
	I0903 23:26:02.771142  145896 main.go:141] libmachine: (kubernetes-upgrade-938492)   <memory unit='MiB'>3072</memory>
	I0903 23:26:02.771153  145896 main.go:141] libmachine: (kubernetes-upgrade-938492)   <vcpu>2</vcpu>
	I0903 23:26:02.771164  145896 main.go:141] libmachine: (kubernetes-upgrade-938492)   <features>
	I0903 23:26:02.771172  145896 main.go:141] libmachine: (kubernetes-upgrade-938492)     <acpi/>
	I0903 23:26:02.771189  145896 main.go:141] libmachine: (kubernetes-upgrade-938492)     <apic/>
	I0903 23:26:02.771201  145896 main.go:141] libmachine: (kubernetes-upgrade-938492)     <pae/>
	I0903 23:26:02.771212  145896 main.go:141] libmachine: (kubernetes-upgrade-938492)     
	I0903 23:26:02.771235  145896 main.go:141] libmachine: (kubernetes-upgrade-938492)   </features>
	I0903 23:26:02.771260  145896 main.go:141] libmachine: (kubernetes-upgrade-938492)   <cpu mode='host-passthrough'>
	I0903 23:26:02.771269  145896 main.go:141] libmachine: (kubernetes-upgrade-938492)   
	I0903 23:26:02.771276  145896 main.go:141] libmachine: (kubernetes-upgrade-938492)   </cpu>
	I0903 23:26:02.771292  145896 main.go:141] libmachine: (kubernetes-upgrade-938492)   <os>
	I0903 23:26:02.771302  145896 main.go:141] libmachine: (kubernetes-upgrade-938492)     <type>hvm</type>
	I0903 23:26:02.771311  145896 main.go:141] libmachine: (kubernetes-upgrade-938492)     <boot dev='cdrom'/>
	I0903 23:26:02.771320  145896 main.go:141] libmachine: (kubernetes-upgrade-938492)     <boot dev='hd'/>
	I0903 23:26:02.771346  145896 main.go:141] libmachine: (kubernetes-upgrade-938492)     <bootmenu enable='no'/>
	I0903 23:26:02.771361  145896 main.go:141] libmachine: (kubernetes-upgrade-938492)   </os>
	I0903 23:26:02.771368  145896 main.go:141] libmachine: (kubernetes-upgrade-938492)   <devices>
	I0903 23:26:02.771377  145896 main.go:141] libmachine: (kubernetes-upgrade-938492)     <disk type='file' device='cdrom'>
	I0903 23:26:02.771393  145896 main.go:141] libmachine: (kubernetes-upgrade-938492)       <source file='/home/jenkins/minikube-integration/21341-109162/.minikube/machines/kubernetes-upgrade-938492/boot2docker.iso'/>
	I0903 23:26:02.771401  145896 main.go:141] libmachine: (kubernetes-upgrade-938492)       <target dev='hdc' bus='scsi'/>
	I0903 23:26:02.771414  145896 main.go:141] libmachine: (kubernetes-upgrade-938492)       <readonly/>
	I0903 23:26:02.771420  145896 main.go:141] libmachine: (kubernetes-upgrade-938492)     </disk>
	I0903 23:26:02.771426  145896 main.go:141] libmachine: (kubernetes-upgrade-938492)     <disk type='file' device='disk'>
	I0903 23:26:02.771434  145896 main.go:141] libmachine: (kubernetes-upgrade-938492)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0903 23:26:02.771443  145896 main.go:141] libmachine: (kubernetes-upgrade-938492)       <source file='/home/jenkins/minikube-integration/21341-109162/.minikube/machines/kubernetes-upgrade-938492/kubernetes-upgrade-938492.rawdisk'/>
	I0903 23:26:02.771450  145896 main.go:141] libmachine: (kubernetes-upgrade-938492)       <target dev='hda' bus='virtio'/>
	I0903 23:26:02.771455  145896 main.go:141] libmachine: (kubernetes-upgrade-938492)     </disk>
	I0903 23:26:02.771465  145896 main.go:141] libmachine: (kubernetes-upgrade-938492)     <interface type='network'>
	I0903 23:26:02.771471  145896 main.go:141] libmachine: (kubernetes-upgrade-938492)       <source network='mk-kubernetes-upgrade-938492'/>
	I0903 23:26:02.771478  145896 main.go:141] libmachine: (kubernetes-upgrade-938492)       <model type='virtio'/>
	I0903 23:26:02.771483  145896 main.go:141] libmachine: (kubernetes-upgrade-938492)     </interface>
	I0903 23:26:02.771489  145896 main.go:141] libmachine: (kubernetes-upgrade-938492)     <interface type='network'>
	I0903 23:26:02.771495  145896 main.go:141] libmachine: (kubernetes-upgrade-938492)       <source network='default'/>
	I0903 23:26:02.771502  145896 main.go:141] libmachine: (kubernetes-upgrade-938492)       <model type='virtio'/>
	I0903 23:26:02.771506  145896 main.go:141] libmachine: (kubernetes-upgrade-938492)     </interface>
	I0903 23:26:02.771513  145896 main.go:141] libmachine: (kubernetes-upgrade-938492)     <serial type='pty'>
	I0903 23:26:02.771518  145896 main.go:141] libmachine: (kubernetes-upgrade-938492)       <target port='0'/>
	I0903 23:26:02.771522  145896 main.go:141] libmachine: (kubernetes-upgrade-938492)     </serial>
	I0903 23:26:02.771527  145896 main.go:141] libmachine: (kubernetes-upgrade-938492)     <console type='pty'>
	I0903 23:26:02.771539  145896 main.go:141] libmachine: (kubernetes-upgrade-938492)       <target type='serial' port='0'/>
	I0903 23:26:02.771546  145896 main.go:141] libmachine: (kubernetes-upgrade-938492)     </console>
	I0903 23:26:02.771557  145896 main.go:141] libmachine: (kubernetes-upgrade-938492)     <rng model='virtio'>
	I0903 23:26:02.771569  145896 main.go:141] libmachine: (kubernetes-upgrade-938492)       <backend model='random'>/dev/random</backend>
	I0903 23:26:02.771580  145896 main.go:141] libmachine: (kubernetes-upgrade-938492)     </rng>
	I0903 23:26:02.771591  145896 main.go:141] libmachine: (kubernetes-upgrade-938492)     
	I0903 23:26:02.771599  145896 main.go:141] libmachine: (kubernetes-upgrade-938492)     
	I0903 23:26:02.771607  145896 main.go:141] libmachine: (kubernetes-upgrade-938492)   </devices>
	I0903 23:26:02.771614  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) </domain>
	I0903 23:26:02.771621  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) 
	I0903 23:26:02.775522  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) DBG | domain kubernetes-upgrade-938492 has defined MAC address 52:54:00:b6:22:69 in network default
	I0903 23:26:02.776066  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) starting domain...
	I0903 23:26:02.776088  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) ensuring networks are active...
	I0903 23:26:02.776099  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) DBG | domain kubernetes-upgrade-938492 has defined MAC address 52:54:00:8d:7e:6a in network mk-kubernetes-upgrade-938492
	I0903 23:26:02.776711  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) Ensuring network default is active
	I0903 23:26:02.777094  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) Ensuring network mk-kubernetes-upgrade-938492 is active
	I0903 23:26:02.777605  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) getting domain XML...
	I0903 23:26:02.778225  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) creating domain...
	I0903 23:26:04.095598  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) waiting for IP...
	I0903 23:26:04.096728  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) DBG | domain kubernetes-upgrade-938492 has defined MAC address 52:54:00:8d:7e:6a in network mk-kubernetes-upgrade-938492
	I0903 23:26:04.097270  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) DBG | unable to find current IP address of domain kubernetes-upgrade-938492 in network mk-kubernetes-upgrade-938492
	I0903 23:26:04.097326  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) DBG | I0903 23:26:04.097284  148340 retry.go:31] will retry after 297.048084ms: waiting for domain to come up
	I0903 23:26:04.396077  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) DBG | domain kubernetes-upgrade-938492 has defined MAC address 52:54:00:8d:7e:6a in network mk-kubernetes-upgrade-938492
	I0903 23:26:04.396647  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) DBG | unable to find current IP address of domain kubernetes-upgrade-938492 in network mk-kubernetes-upgrade-938492
	I0903 23:26:04.396679  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) DBG | I0903 23:26:04.396613  148340 retry.go:31] will retry after 266.640152ms: waiting for domain to come up
	I0903 23:26:04.665199  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) DBG | domain kubernetes-upgrade-938492 has defined MAC address 52:54:00:8d:7e:6a in network mk-kubernetes-upgrade-938492
	I0903 23:26:04.665742  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) DBG | unable to find current IP address of domain kubernetes-upgrade-938492 in network mk-kubernetes-upgrade-938492
	I0903 23:26:04.665777  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) DBG | I0903 23:26:04.665712  148340 retry.go:31] will retry after 381.591847ms: waiting for domain to come up
	I0903 23:26:05.049510  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) DBG | domain kubernetes-upgrade-938492 has defined MAC address 52:54:00:8d:7e:6a in network mk-kubernetes-upgrade-938492
	I0903 23:26:05.050182  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) DBG | unable to find current IP address of domain kubernetes-upgrade-938492 in network mk-kubernetes-upgrade-938492
	I0903 23:26:05.050214  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) DBG | I0903 23:26:05.050158  148340 retry.go:31] will retry after 584.935192ms: waiting for domain to come up
	I0903 23:26:05.637149  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) DBG | domain kubernetes-upgrade-938492 has defined MAC address 52:54:00:8d:7e:6a in network mk-kubernetes-upgrade-938492
	I0903 23:26:05.637702  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) DBG | unable to find current IP address of domain kubernetes-upgrade-938492 in network mk-kubernetes-upgrade-938492
	I0903 23:26:05.637733  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) DBG | I0903 23:26:05.637677  148340 retry.go:31] will retry after 591.426012ms: waiting for domain to come up
	I0903 23:26:06.230705  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) DBG | domain kubernetes-upgrade-938492 has defined MAC address 52:54:00:8d:7e:6a in network mk-kubernetes-upgrade-938492
	I0903 23:26:06.231161  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) DBG | unable to find current IP address of domain kubernetes-upgrade-938492 in network mk-kubernetes-upgrade-938492
	I0903 23:26:06.231238  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) DBG | I0903 23:26:06.231151  148340 retry.go:31] will retry after 875.045015ms: waiting for domain to come up
	I0903 23:26:07.107470  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) DBG | domain kubernetes-upgrade-938492 has defined MAC address 52:54:00:8d:7e:6a in network mk-kubernetes-upgrade-938492
	I0903 23:26:07.107944  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) DBG | unable to find current IP address of domain kubernetes-upgrade-938492 in network mk-kubernetes-upgrade-938492
	I0903 23:26:07.108010  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) DBG | I0903 23:26:07.107916  148340 retry.go:31] will retry after 1.039005816s: waiting for domain to come up
	I0903 23:26:08.148480  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) DBG | domain kubernetes-upgrade-938492 has defined MAC address 52:54:00:8d:7e:6a in network mk-kubernetes-upgrade-938492
	I0903 23:26:08.148931  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) DBG | unable to find current IP address of domain kubernetes-upgrade-938492 in network mk-kubernetes-upgrade-938492
	I0903 23:26:08.148976  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) DBG | I0903 23:26:08.148917  148340 retry.go:31] will retry after 1.421476658s: waiting for domain to come up
	I0903 23:26:09.571751  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) DBG | domain kubernetes-upgrade-938492 has defined MAC address 52:54:00:8d:7e:6a in network mk-kubernetes-upgrade-938492
	I0903 23:26:09.572179  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) DBG | unable to find current IP address of domain kubernetes-upgrade-938492 in network mk-kubernetes-upgrade-938492
	I0903 23:26:09.572209  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) DBG | I0903 23:26:09.572137  148340 retry.go:31] will retry after 1.747322364s: waiting for domain to come up
	I0903 23:26:11.322227  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) DBG | domain kubernetes-upgrade-938492 has defined MAC address 52:54:00:8d:7e:6a in network mk-kubernetes-upgrade-938492
	I0903 23:26:11.322793  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) DBG | unable to find current IP address of domain kubernetes-upgrade-938492 in network mk-kubernetes-upgrade-938492
	I0903 23:26:11.322821  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) DBG | I0903 23:26:11.322754  148340 retry.go:31] will retry after 1.621515384s: waiting for domain to come up
	I0903 23:26:12.945947  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) DBG | domain kubernetes-upgrade-938492 has defined MAC address 52:54:00:8d:7e:6a in network mk-kubernetes-upgrade-938492
	I0903 23:26:12.946397  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) DBG | unable to find current IP address of domain kubernetes-upgrade-938492 in network mk-kubernetes-upgrade-938492
	I0903 23:26:12.946447  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) DBG | I0903 23:26:12.946370  148340 retry.go:31] will retry after 1.966215695s: waiting for domain to come up
	I0903 23:26:14.914655  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) DBG | domain kubernetes-upgrade-938492 has defined MAC address 52:54:00:8d:7e:6a in network mk-kubernetes-upgrade-938492
	I0903 23:26:14.915104  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) DBG | unable to find current IP address of domain kubernetes-upgrade-938492 in network mk-kubernetes-upgrade-938492
	I0903 23:26:14.915125  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) DBG | I0903 23:26:14.915065  148340 retry.go:31] will retry after 2.684717373s: waiting for domain to come up
	I0903 23:26:17.600872  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) DBG | domain kubernetes-upgrade-938492 has defined MAC address 52:54:00:8d:7e:6a in network mk-kubernetes-upgrade-938492
	I0903 23:26:17.601409  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) DBG | unable to find current IP address of domain kubernetes-upgrade-938492 in network mk-kubernetes-upgrade-938492
	I0903 23:26:17.601438  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) DBG | I0903 23:26:17.601355  148340 retry.go:31] will retry after 3.157005052s: waiting for domain to come up
	I0903 23:26:20.761852  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) DBG | domain kubernetes-upgrade-938492 has defined MAC address 52:54:00:8d:7e:6a in network mk-kubernetes-upgrade-938492
	I0903 23:26:20.762323  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) DBG | unable to find current IP address of domain kubernetes-upgrade-938492 in network mk-kubernetes-upgrade-938492
	I0903 23:26:20.762349  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) DBG | I0903 23:26:20.762293  148340 retry.go:31] will retry after 3.54303886s: waiting for domain to come up
	I0903 23:26:24.308022  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) DBG | domain kubernetes-upgrade-938492 has defined MAC address 52:54:00:8d:7e:6a in network mk-kubernetes-upgrade-938492
	I0903 23:26:24.308463  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) found domain IP: 192.168.50.53
	I0903 23:26:24.308487  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) reserving static IP address...
	I0903 23:26:24.308501  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) DBG | domain kubernetes-upgrade-938492 has current primary IP address 192.168.50.53 and MAC address 52:54:00:8d:7e:6a in network mk-kubernetes-upgrade-938492
	I0903 23:26:24.308844  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) DBG | unable to find host DHCP lease matching {name: "kubernetes-upgrade-938492", mac: "52:54:00:8d:7e:6a", ip: "192.168.50.53"} in network mk-kubernetes-upgrade-938492
	I0903 23:26:24.382942  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) DBG | Getting to WaitForSSH function...
	I0903 23:26:24.382985  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) reserved static IP address 192.168.50.53 for domain kubernetes-upgrade-938492
	I0903 23:26:24.383003  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) waiting for SSH...
	I0903 23:26:24.385988  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) DBG | domain kubernetes-upgrade-938492 has defined MAC address 52:54:00:8d:7e:6a in network mk-kubernetes-upgrade-938492
	I0903 23:26:24.386362  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:7e:6a", ip: ""} in network mk-kubernetes-upgrade-938492: {Iface:virbr2 ExpiryTime:2025-09-04 00:26:17 +0000 UTC Type:0 Mac:52:54:00:8d:7e:6a Iaid: IPaddr:192.168.50.53 Prefix:24 Hostname:minikube Clientid:01:52:54:00:8d:7e:6a}
	I0903 23:26:24.386398  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) DBG | domain kubernetes-upgrade-938492 has defined IP address 192.168.50.53 and MAC address 52:54:00:8d:7e:6a in network mk-kubernetes-upgrade-938492
	I0903 23:26:24.386495  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) DBG | Using SSH client type: external
	I0903 23:26:24.386527  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) DBG | Using SSH private key: /home/jenkins/minikube-integration/21341-109162/.minikube/machines/kubernetes-upgrade-938492/id_rsa (-rw-------)
	I0903 23:26:24.386584  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.53 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/21341-109162/.minikube/machines/kubernetes-upgrade-938492/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0903 23:26:24.386604  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) DBG | About to run SSH command:
	I0903 23:26:24.386620  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) DBG | exit 0
	I0903 23:26:24.514023  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) DBG | SSH cmd err, output: <nil>: 
	I0903 23:26:24.514315  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) KVM machine creation complete
	I0903 23:26:24.514683  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) Calling .GetConfigRaw
	I0903 23:26:24.515273  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) Calling .DriverName
	I0903 23:26:24.515495  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) Calling .DriverName
	I0903 23:26:24.515674  145896 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0903 23:26:24.515690  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) Calling .GetState
	I0903 23:26:24.517574  145896 main.go:141] libmachine: Detecting operating system of created instance...
	I0903 23:26:24.517593  145896 main.go:141] libmachine: Waiting for SSH to be available...
	I0903 23:26:24.517602  145896 main.go:141] libmachine: Getting to WaitForSSH function...
	I0903 23:26:24.517610  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) Calling .GetSSHHostname
	I0903 23:26:24.520889  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) DBG | domain kubernetes-upgrade-938492 has defined MAC address 52:54:00:8d:7e:6a in network mk-kubernetes-upgrade-938492
	I0903 23:26:24.521287  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:7e:6a", ip: ""} in network mk-kubernetes-upgrade-938492: {Iface:virbr2 ExpiryTime:2025-09-04 00:26:17 +0000 UTC Type:0 Mac:52:54:00:8d:7e:6a Iaid: IPaddr:192.168.50.53 Prefix:24 Hostname:kubernetes-upgrade-938492 Clientid:01:52:54:00:8d:7e:6a}
	I0903 23:26:24.521319  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) DBG | domain kubernetes-upgrade-938492 has defined IP address 192.168.50.53 and MAC address 52:54:00:8d:7e:6a in network mk-kubernetes-upgrade-938492
	I0903 23:26:24.521492  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) Calling .GetSSHPort
	I0903 23:26:24.521688  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) Calling .GetSSHKeyPath
	I0903 23:26:24.521874  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) Calling .GetSSHKeyPath
	I0903 23:26:24.522041  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) Calling .GetSSHUsername
	I0903 23:26:24.522220  145896 main.go:141] libmachine: Using SSH client type: native
	I0903 23:26:24.522550  145896 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83a420] 0x83d120 <nil>  [] 0s} 192.168.50.53 22 <nil> <nil>}
	I0903 23:26:24.522565  145896 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0903 23:26:24.632736  145896 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0903 23:26:24.632765  145896 main.go:141] libmachine: Detecting the provisioner...
	I0903 23:26:24.632774  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) Calling .GetSSHHostname
	I0903 23:26:24.635545  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) DBG | domain kubernetes-upgrade-938492 has defined MAC address 52:54:00:8d:7e:6a in network mk-kubernetes-upgrade-938492
	I0903 23:26:24.635995  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:7e:6a", ip: ""} in network mk-kubernetes-upgrade-938492: {Iface:virbr2 ExpiryTime:2025-09-04 00:26:17 +0000 UTC Type:0 Mac:52:54:00:8d:7e:6a Iaid: IPaddr:192.168.50.53 Prefix:24 Hostname:kubernetes-upgrade-938492 Clientid:01:52:54:00:8d:7e:6a}
	I0903 23:26:24.636020  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) DBG | domain kubernetes-upgrade-938492 has defined IP address 192.168.50.53 and MAC address 52:54:00:8d:7e:6a in network mk-kubernetes-upgrade-938492
	I0903 23:26:24.636130  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) Calling .GetSSHPort
	I0903 23:26:24.636327  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) Calling .GetSSHKeyPath
	I0903 23:26:24.636531  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) Calling .GetSSHKeyPath
	I0903 23:26:24.636761  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) Calling .GetSSHUsername
	I0903 23:26:24.636952  145896 main.go:141] libmachine: Using SSH client type: native
	I0903 23:26:24.637150  145896 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83a420] 0x83d120 <nil>  [] 0s} 192.168.50.53 22 <nil> <nil>}
	I0903 23:26:24.637160  145896 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0903 23:26:24.750290  145896 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2025.02-dirty
	ID=buildroot
	VERSION_ID=2025.02
	PRETTY_NAME="Buildroot 2025.02"
	
	I0903 23:26:24.750368  145896 main.go:141] libmachine: found compatible host: buildroot
	I0903 23:26:24.750379  145896 main.go:141] libmachine: Provisioning with buildroot...
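
Provisioner detection reduces to running `cat /etc/os-release` over SSH and matching the ID field against the provisioners the machine library knows (buildroot here). A local sketch of that parse-and-match step (the field name follows the os-release output shown above; the helper function is illustrative):

	package main

	import (
		"bufio"
		"fmt"
		"os"
		"strings"
	)

	// osReleaseID extracts the ID= field from an os-release style file.
	func osReleaseID(path string) (string, error) {
		f, err := os.Open(path)
		if err != nil {
			return "", err
		}
		defer f.Close()
		sc := bufio.NewScanner(f)
		for sc.Scan() {
			if v, ok := strings.CutPrefix(sc.Text(), "ID="); ok {
				return strings.Trim(v, `"`), nil
			}
		}
		return "", sc.Err()
	}

	func main() {
		id, err := osReleaseID("/etc/os-release")
		if err != nil {
			fmt.Println("error:", err)
			return
		}
		if id == "buildroot" {
			fmt.Println("found compatible host: buildroot")
		} else {
			fmt.Println("unsupported provisioner:", id)
		}
	}
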
	I0903 23:26:24.750389  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) Calling .GetMachineName
	I0903 23:26:24.750690  145896 buildroot.go:166] provisioning hostname "kubernetes-upgrade-938492"
	I0903 23:26:24.750724  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) Calling .GetMachineName
	I0903 23:26:24.750923  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) Calling .GetSSHHostname
	I0903 23:26:24.753833  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) DBG | domain kubernetes-upgrade-938492 has defined MAC address 52:54:00:8d:7e:6a in network mk-kubernetes-upgrade-938492
	I0903 23:26:24.754177  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:7e:6a", ip: ""} in network mk-kubernetes-upgrade-938492: {Iface:virbr2 ExpiryTime:2025-09-04 00:26:17 +0000 UTC Type:0 Mac:52:54:00:8d:7e:6a Iaid: IPaddr:192.168.50.53 Prefix:24 Hostname:kubernetes-upgrade-938492 Clientid:01:52:54:00:8d:7e:6a}
	I0903 23:26:24.754202  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) DBG | domain kubernetes-upgrade-938492 has defined IP address 192.168.50.53 and MAC address 52:54:00:8d:7e:6a in network mk-kubernetes-upgrade-938492
	I0903 23:26:24.754372  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) Calling .GetSSHPort
	I0903 23:26:24.754593  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) Calling .GetSSHKeyPath
	I0903 23:26:24.754763  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) Calling .GetSSHKeyPath
	I0903 23:26:24.754901  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) Calling .GetSSHUsername
	I0903 23:26:24.755052  145896 main.go:141] libmachine: Using SSH client type: native
	I0903 23:26:24.755242  145896 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83a420] 0x83d120 <nil>  [] 0s} 192.168.50.53 22 <nil> <nil>}
	I0903 23:26:24.755253  145896 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-938492 && echo "kubernetes-upgrade-938492" | sudo tee /etc/hostname
	I0903 23:26:24.881529  145896 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-938492
	
	I0903 23:26:24.881558  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) Calling .GetSSHHostname
	I0903 23:26:24.884407  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) DBG | domain kubernetes-upgrade-938492 has defined MAC address 52:54:00:8d:7e:6a in network mk-kubernetes-upgrade-938492
	I0903 23:26:24.884762  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:7e:6a", ip: ""} in network mk-kubernetes-upgrade-938492: {Iface:virbr2 ExpiryTime:2025-09-04 00:26:17 +0000 UTC Type:0 Mac:52:54:00:8d:7e:6a Iaid: IPaddr:192.168.50.53 Prefix:24 Hostname:kubernetes-upgrade-938492 Clientid:01:52:54:00:8d:7e:6a}
	I0903 23:26:24.884790  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) DBG | domain kubernetes-upgrade-938492 has defined IP address 192.168.50.53 and MAC address 52:54:00:8d:7e:6a in network mk-kubernetes-upgrade-938492
	I0903 23:26:24.884958  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) Calling .GetSSHPort
	I0903 23:26:24.885143  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) Calling .GetSSHKeyPath
	I0903 23:26:24.885374  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) Calling .GetSSHKeyPath
	I0903 23:26:24.885550  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) Calling .GetSSHUsername
	I0903 23:26:24.885815  145896 main.go:141] libmachine: Using SSH client type: native
	I0903 23:26:24.886013  145896 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83a420] 0x83d120 <nil>  [] 0s} 192.168.50.53 22 <nil> <nil>}
	I0903 23:26:24.886030  145896 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-938492' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-938492/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-938492' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0903 23:26:25.011858  145896 main.go:141] libmachine: SSH cmd err, output: <nil>: 
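
The shell snippet above is idempotent: it leaves /etc/hosts alone when a line already ends in the hostname, and it prefers rewriting an existing 127.0.1.1 entry over appending a new one. The same logic expressed in Go (a sketch only; minikube itself runs the shell version over SSH, and the demo path below is hypothetical):

	package main

	import (
		"fmt"
		"os"
		"regexp"
		"strings"
	)

	// ensureHostname mirrors the logged shell snippet: add or rewrite a
	// 127.0.1.1 entry so the machine can resolve its own hostname.
	func ensureHostname(path, hostname string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		content := string(data)
		if regexp.MustCompile(`(?m)\s` + regexp.QuoteMeta(hostname) + `$`).MatchString(content) {
			return nil // already present, nothing to do
		}
		loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
		if loopback.MatchString(content) {
			content = loopback.ReplaceAllString(content, "127.0.1.1 "+hostname)
		} else {
			if !strings.HasSuffix(content, "\n") {
				content += "\n"
			}
			content += "127.0.1.1 " + hostname + "\n"
		}
		return os.WriteFile(path, []byte(content), 0644)
	}

	func main() {
		// Hypothetical scratch copy; point at /etc/hosts to run for real (as root).
		if err := ensureHostname("/tmp/hosts-demo", "kubernetes-upgrade-938492"); err != nil {
			fmt.Println("error:", err)
		}
	}
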
	I0903 23:26:25.011899  145896 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21341-109162/.minikube CaCertPath:/home/jenkins/minikube-integration/21341-109162/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21341-109162/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21341-109162/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21341-109162/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21341-109162/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21341-109162/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21341-109162/.minikube}
	I0903 23:26:25.011938  145896 buildroot.go:174] setting up certificates
	I0903 23:26:25.011956  145896 provision.go:84] configureAuth start
	I0903 23:26:25.011975  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) Calling .GetMachineName
	I0903 23:26:25.012300  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) Calling .GetIP
	I0903 23:26:25.015570  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) DBG | domain kubernetes-upgrade-938492 has defined MAC address 52:54:00:8d:7e:6a in network mk-kubernetes-upgrade-938492
	I0903 23:26:25.015969  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:7e:6a", ip: ""} in network mk-kubernetes-upgrade-938492: {Iface:virbr2 ExpiryTime:2025-09-04 00:26:17 +0000 UTC Type:0 Mac:52:54:00:8d:7e:6a Iaid: IPaddr:192.168.50.53 Prefix:24 Hostname:kubernetes-upgrade-938492 Clientid:01:52:54:00:8d:7e:6a}
	I0903 23:26:25.016003  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) DBG | domain kubernetes-upgrade-938492 has defined IP address 192.168.50.53 and MAC address 52:54:00:8d:7e:6a in network mk-kubernetes-upgrade-938492
	I0903 23:26:25.016146  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) Calling .GetSSHHostname
	I0903 23:26:25.018194  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) DBG | domain kubernetes-upgrade-938492 has defined MAC address 52:54:00:8d:7e:6a in network mk-kubernetes-upgrade-938492
	I0903 23:26:25.018525  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:7e:6a", ip: ""} in network mk-kubernetes-upgrade-938492: {Iface:virbr2 ExpiryTime:2025-09-04 00:26:17 +0000 UTC Type:0 Mac:52:54:00:8d:7e:6a Iaid: IPaddr:192.168.50.53 Prefix:24 Hostname:kubernetes-upgrade-938492 Clientid:01:52:54:00:8d:7e:6a}
	I0903 23:26:25.018561  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) DBG | domain kubernetes-upgrade-938492 has defined IP address 192.168.50.53 and MAC address 52:54:00:8d:7e:6a in network mk-kubernetes-upgrade-938492
	I0903 23:26:25.018769  145896 provision.go:143] copyHostCerts
	I0903 23:26:25.018835  145896 exec_runner.go:144] found /home/jenkins/minikube-integration/21341-109162/.minikube/ca.pem, removing ...
	I0903 23:26:25.018858  145896 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21341-109162/.minikube/ca.pem
	I0903 23:26:25.018931  145896 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21341-109162/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21341-109162/.minikube/ca.pem (1078 bytes)
	I0903 23:26:25.019053  145896 exec_runner.go:144] found /home/jenkins/minikube-integration/21341-109162/.minikube/cert.pem, removing ...
	I0903 23:26:25.019067  145896 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21341-109162/.minikube/cert.pem
	I0903 23:26:25.019105  145896 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21341-109162/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21341-109162/.minikube/cert.pem (1123 bytes)
	I0903 23:26:25.019180  145896 exec_runner.go:144] found /home/jenkins/minikube-integration/21341-109162/.minikube/key.pem, removing ...
	I0903 23:26:25.019189  145896 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21341-109162/.minikube/key.pem
	I0903 23:26:25.019207  145896 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21341-109162/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21341-109162/.minikube/key.pem (1675 bytes)
	I0903 23:26:25.019260  145896 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21341-109162/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21341-109162/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21341-109162/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-938492 san=[127.0.0.1 192.168.50.53 kubernetes-upgrade-938492 localhost minikube]
	I0903 23:26:25.413037  145896 provision.go:177] copyRemoteCerts
	I0903 23:26:25.413094  145896 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0903 23:26:25.413120  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) Calling .GetSSHHostname
	I0903 23:26:25.415963  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) DBG | domain kubernetes-upgrade-938492 has defined MAC address 52:54:00:8d:7e:6a in network mk-kubernetes-upgrade-938492
	I0903 23:26:25.416275  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:7e:6a", ip: ""} in network mk-kubernetes-upgrade-938492: {Iface:virbr2 ExpiryTime:2025-09-04 00:26:17 +0000 UTC Type:0 Mac:52:54:00:8d:7e:6a Iaid: IPaddr:192.168.50.53 Prefix:24 Hostname:kubernetes-upgrade-938492 Clientid:01:52:54:00:8d:7e:6a}
	I0903 23:26:25.416307  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) DBG | domain kubernetes-upgrade-938492 has defined IP address 192.168.50.53 and MAC address 52:54:00:8d:7e:6a in network mk-kubernetes-upgrade-938492
	I0903 23:26:25.416491  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) Calling .GetSSHPort
	I0903 23:26:25.416705  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) Calling .GetSSHKeyPath
	I0903 23:26:25.416867  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) Calling .GetSSHUsername
	I0903 23:26:25.417006  145896 sshutil.go:53] new ssh client: &{IP:192.168.50.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21341-109162/.minikube/machines/kubernetes-upgrade-938492/id_rsa Username:docker}
	I0903 23:26:25.509846  145896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0903 23:26:25.539552  145896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0903 23:26:25.566908  145896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0903 23:26:25.593761  145896 provision.go:87] duration metric: took 581.786163ms to configureAuth
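
configureAuth generates a server certificate whose SANs cover every name the machine may be reached by (127.0.0.1, the node IP, the machine name, localhost, minikube) and scps it to /etc/docker. One way to sanity-check such a certificate locally is to parse the PEM and verify it against each SAN; a sketch, with the path and SAN list taken from this run's log:

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
	)

	func main() {
		// Path from the log above; adjust for your own .minikube directory.
		pemBytes, err := os.ReadFile("/home/jenkins/minikube-integration/21341-109162/.minikube/machines/server.pem")
		if err != nil {
			fmt.Println("read:", err)
			return
		}
		block, _ := pem.Decode(pemBytes)
		if block == nil {
			fmt.Println("no PEM block found")
			return
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			fmt.Println("parse:", err)
			return
		}
		// SAN list from the provision.go:117 line above.
		for _, san := range []string{"127.0.0.1", "192.168.50.53", "kubernetes-upgrade-938492", "localhost", "minikube"} {
			if err := cert.VerifyHostname(san); err != nil {
				fmt.Printf("SAN %s: NOT covered (%v)\n", san, err)
			} else {
				fmt.Printf("SAN %s: ok\n", san)
			}
		}
	}
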
	I0903 23:26:25.593797  145896 buildroot.go:189] setting minikube options for container-runtime
	I0903 23:26:25.594011  145896 config.go:182] Loaded profile config "kubernetes-upgrade-938492": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0903 23:26:25.594108  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) Calling .GetSSHHostname
	I0903 23:26:25.597192  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) DBG | domain kubernetes-upgrade-938492 has defined MAC address 52:54:00:8d:7e:6a in network mk-kubernetes-upgrade-938492
	I0903 23:26:25.597637  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:7e:6a", ip: ""} in network mk-kubernetes-upgrade-938492: {Iface:virbr2 ExpiryTime:2025-09-04 00:26:17 +0000 UTC Type:0 Mac:52:54:00:8d:7e:6a Iaid: IPaddr:192.168.50.53 Prefix:24 Hostname:kubernetes-upgrade-938492 Clientid:01:52:54:00:8d:7e:6a}
	I0903 23:26:25.597672  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) DBG | domain kubernetes-upgrade-938492 has defined IP address 192.168.50.53 and MAC address 52:54:00:8d:7e:6a in network mk-kubernetes-upgrade-938492
	I0903 23:26:25.597820  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) Calling .GetSSHPort
	I0903 23:26:25.598043  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) Calling .GetSSHKeyPath
	I0903 23:26:25.598223  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) Calling .GetSSHKeyPath
	I0903 23:26:25.598413  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) Calling .GetSSHUsername
	I0903 23:26:25.598620  145896 main.go:141] libmachine: Using SSH client type: native
	I0903 23:26:25.598816  145896 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83a420] 0x83d120 <nil>  [] 0s} 192.168.50.53 22 <nil> <nil>}
	I0903 23:26:25.598831  145896 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0903 23:26:25.859296  145896 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0903 23:26:25.859341  145896 main.go:141] libmachine: Checking connection to Docker...
	I0903 23:26:25.859354  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) Calling .GetURL
	I0903 23:26:25.860920  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) DBG | using libvirt version 6000000
	I0903 23:26:25.863336  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) DBG | domain kubernetes-upgrade-938492 has defined MAC address 52:54:00:8d:7e:6a in network mk-kubernetes-upgrade-938492
	I0903 23:26:25.863660  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:7e:6a", ip: ""} in network mk-kubernetes-upgrade-938492: {Iface:virbr2 ExpiryTime:2025-09-04 00:26:17 +0000 UTC Type:0 Mac:52:54:00:8d:7e:6a Iaid: IPaddr:192.168.50.53 Prefix:24 Hostname:kubernetes-upgrade-938492 Clientid:01:52:54:00:8d:7e:6a}
	I0903 23:26:25.863693  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) DBG | domain kubernetes-upgrade-938492 has defined IP address 192.168.50.53 and MAC address 52:54:00:8d:7e:6a in network mk-kubernetes-upgrade-938492
	I0903 23:26:25.863830  145896 main.go:141] libmachine: Docker is up and running!
	I0903 23:26:25.863847  145896 main.go:141] libmachine: Reticulating splines...
	I0903 23:26:25.863856  145896 client.go:171] duration metric: took 23.748508751s to LocalClient.Create
	I0903 23:26:25.863875  145896 start.go:167] duration metric: took 23.748572339s to libmachine.API.Create "kubernetes-upgrade-938492"
	I0903 23:26:25.863885  145896 start.go:293] postStartSetup for "kubernetes-upgrade-938492" (driver="kvm2")
	I0903 23:26:25.863895  145896 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0903 23:26:25.863913  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) Calling .DriverName
	I0903 23:26:25.864129  145896 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0903 23:26:25.864154  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) Calling .GetSSHHostname
	I0903 23:26:25.866149  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) DBG | domain kubernetes-upgrade-938492 has defined MAC address 52:54:00:8d:7e:6a in network mk-kubernetes-upgrade-938492
	I0903 23:26:25.866454  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:7e:6a", ip: ""} in network mk-kubernetes-upgrade-938492: {Iface:virbr2 ExpiryTime:2025-09-04 00:26:17 +0000 UTC Type:0 Mac:52:54:00:8d:7e:6a Iaid: IPaddr:192.168.50.53 Prefix:24 Hostname:kubernetes-upgrade-938492 Clientid:01:52:54:00:8d:7e:6a}
	I0903 23:26:25.866475  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) DBG | domain kubernetes-upgrade-938492 has defined IP address 192.168.50.53 and MAC address 52:54:00:8d:7e:6a in network mk-kubernetes-upgrade-938492
	I0903 23:26:25.866653  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) Calling .GetSSHPort
	I0903 23:26:25.866857  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) Calling .GetSSHKeyPath
	I0903 23:26:25.867008  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) Calling .GetSSHUsername
	I0903 23:26:25.867129  145896 sshutil.go:53] new ssh client: &{IP:192.168.50.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21341-109162/.minikube/machines/kubernetes-upgrade-938492/id_rsa Username:docker}
	I0903 23:26:25.954601  145896 ssh_runner.go:195] Run: cat /etc/os-release
	I0903 23:26:25.959173  145896 info.go:137] Remote host: Buildroot 2025.02
	I0903 23:26:25.959205  145896 filesync.go:126] Scanning /home/jenkins/minikube-integration/21341-109162/.minikube/addons for local assets ...
	I0903 23:26:25.959281  145896 filesync.go:126] Scanning /home/jenkins/minikube-integration/21341-109162/.minikube/files for local assets ...
	I0903 23:26:25.959350  145896 filesync.go:149] local asset: /home/jenkins/minikube-integration/21341-109162/.minikube/files/etc/ssl/certs/1132882.pem -> 1132882.pem in /etc/ssl/certs
	I0903 23:26:25.959434  145896 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0903 23:26:25.971691  145896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/files/etc/ssl/certs/1132882.pem --> /etc/ssl/certs/1132882.pem (1708 bytes)
	I0903 23:26:26.001795  145896 start.go:296] duration metric: took 137.890276ms for postStartSetup
	I0903 23:26:26.001870  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) Calling .GetConfigRaw
	I0903 23:26:26.002483  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) Calling .GetIP
	I0903 23:26:26.005225  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) DBG | domain kubernetes-upgrade-938492 has defined MAC address 52:54:00:8d:7e:6a in network mk-kubernetes-upgrade-938492
	I0903 23:26:26.005616  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:7e:6a", ip: ""} in network mk-kubernetes-upgrade-938492: {Iface:virbr2 ExpiryTime:2025-09-04 00:26:17 +0000 UTC Type:0 Mac:52:54:00:8d:7e:6a Iaid: IPaddr:192.168.50.53 Prefix:24 Hostname:kubernetes-upgrade-938492 Clientid:01:52:54:00:8d:7e:6a}
	I0903 23:26:26.005641  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) DBG | domain kubernetes-upgrade-938492 has defined IP address 192.168.50.53 and MAC address 52:54:00:8d:7e:6a in network mk-kubernetes-upgrade-938492
	I0903 23:26:26.005915  145896 profile.go:143] Saving config to /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/kubernetes-upgrade-938492/config.json ...
	I0903 23:26:26.006160  145896 start.go:128] duration metric: took 23.911671331s to createHost
	I0903 23:26:26.006191  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) Calling .GetSSHHostname
	I0903 23:26:26.008559  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) DBG | domain kubernetes-upgrade-938492 has defined MAC address 52:54:00:8d:7e:6a in network mk-kubernetes-upgrade-938492
	I0903 23:26:26.008935  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:7e:6a", ip: ""} in network mk-kubernetes-upgrade-938492: {Iface:virbr2 ExpiryTime:2025-09-04 00:26:17 +0000 UTC Type:0 Mac:52:54:00:8d:7e:6a Iaid: IPaddr:192.168.50.53 Prefix:24 Hostname:kubernetes-upgrade-938492 Clientid:01:52:54:00:8d:7e:6a}
	I0903 23:26:26.008967  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) DBG | domain kubernetes-upgrade-938492 has defined IP address 192.168.50.53 and MAC address 52:54:00:8d:7e:6a in network mk-kubernetes-upgrade-938492
	I0903 23:26:26.009112  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) Calling .GetSSHPort
	I0903 23:26:26.009312  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) Calling .GetSSHKeyPath
	I0903 23:26:26.009502  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) Calling .GetSSHKeyPath
	I0903 23:26:26.009659  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) Calling .GetSSHUsername
	I0903 23:26:26.009863  145896 main.go:141] libmachine: Using SSH client type: native
	I0903 23:26:26.010141  145896 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83a420] 0x83d120 <nil>  [] 0s} 192.168.50.53 22 <nil> <nil>}
	I0903 23:26:26.010160  145896 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0903 23:26:26.126253  145896 main.go:141] libmachine: SSH cmd err, output: <nil>: 1756941986.100994580
	
	I0903 23:26:26.126295  145896 fix.go:216] guest clock: 1756941986.100994580
	I0903 23:26:26.126303  145896 fix.go:229] Guest: 2025-09-03 23:26:26.10099458 +0000 UTC Remote: 2025-09-03 23:26:26.006175348 +0000 UTC m=+50.713437405 (delta=94.819232ms)
	I0903 23:26:26.126325  145896 fix.go:200] guest clock delta is within tolerance: 94.819232ms
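
The guest-clock check runs `date +%s.%N` inside the VM, reads the output as fractional seconds since the epoch, and compares it with the host's reference instant; the delta (94.8ms here) must stay within a tolerance before provisioning continues. A sketch of the delta computation using this run's values (the one-second tolerance below is an assumption for illustration, not minikube's threshold):

	package main

	import (
		"fmt"
		"math"
		"strconv"
		"time"
	)

	func main() {
		// Output of `date +%s.%N` on the guest, taken from the log above.
		// (float64 parsing loses sub-microsecond precision; fine for a demo.)
		secs, err := strconv.ParseFloat("1756941986.100994580", 64)
		if err != nil {
			fmt.Println("parse:", err)
			return
		}
		guest := time.Unix(0, int64(secs*float64(time.Second)))
		// Host-side reference instant recorded in the same log line.
		host := time.Date(2025, 9, 3, 23, 26, 26, 6175348, time.UTC)
		delta := guest.Sub(host)
		const tolerance = time.Second // illustrative
		if math.Abs(float64(delta)) <= float64(tolerance) {
			fmt.Printf("guest clock delta is within tolerance: %v\n", delta)
		} else {
			fmt.Printf("guest clock skewed by %v, would resync\n", delta)
		}
	}
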
	I0903 23:26:26.126332  145896 start.go:83] releasing machines lock for "kubernetes-upgrade-938492", held for 24.032008646s
	I0903 23:26:26.126368  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) Calling .DriverName
	I0903 23:26:26.126666  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) Calling .GetIP
	I0903 23:26:26.129888  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) DBG | domain kubernetes-upgrade-938492 has defined MAC address 52:54:00:8d:7e:6a in network mk-kubernetes-upgrade-938492
	I0903 23:26:26.130286  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:7e:6a", ip: ""} in network mk-kubernetes-upgrade-938492: {Iface:virbr2 ExpiryTime:2025-09-04 00:26:17 +0000 UTC Type:0 Mac:52:54:00:8d:7e:6a Iaid: IPaddr:192.168.50.53 Prefix:24 Hostname:kubernetes-upgrade-938492 Clientid:01:52:54:00:8d:7e:6a}
	I0903 23:26:26.130324  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) DBG | domain kubernetes-upgrade-938492 has defined IP address 192.168.50.53 and MAC address 52:54:00:8d:7e:6a in network mk-kubernetes-upgrade-938492
	I0903 23:26:26.130439  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) Calling .DriverName
	I0903 23:26:26.130929  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) Calling .DriverName
	I0903 23:26:26.131141  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) Calling .DriverName
	I0903 23:26:26.131245  145896 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0903 23:26:26.131303  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) Calling .GetSSHHostname
	I0903 23:26:26.131386  145896 ssh_runner.go:195] Run: cat /version.json
	I0903 23:26:26.131418  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) Calling .GetSSHHostname
	I0903 23:26:26.134146  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) DBG | domain kubernetes-upgrade-938492 has defined MAC address 52:54:00:8d:7e:6a in network mk-kubernetes-upgrade-938492
	I0903 23:26:26.134367  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) DBG | domain kubernetes-upgrade-938492 has defined MAC address 52:54:00:8d:7e:6a in network mk-kubernetes-upgrade-938492
	I0903 23:26:26.134543  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:7e:6a", ip: ""} in network mk-kubernetes-upgrade-938492: {Iface:virbr2 ExpiryTime:2025-09-04 00:26:17 +0000 UTC Type:0 Mac:52:54:00:8d:7e:6a Iaid: IPaddr:192.168.50.53 Prefix:24 Hostname:kubernetes-upgrade-938492 Clientid:01:52:54:00:8d:7e:6a}
	I0903 23:26:26.134573  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) DBG | domain kubernetes-upgrade-938492 has defined IP address 192.168.50.53 and MAC address 52:54:00:8d:7e:6a in network mk-kubernetes-upgrade-938492
	I0903 23:26:26.134705  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) Calling .GetSSHPort
	I0903 23:26:26.134806  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:7e:6a", ip: ""} in network mk-kubernetes-upgrade-938492: {Iface:virbr2 ExpiryTime:2025-09-04 00:26:17 +0000 UTC Type:0 Mac:52:54:00:8d:7e:6a Iaid: IPaddr:192.168.50.53 Prefix:24 Hostname:kubernetes-upgrade-938492 Clientid:01:52:54:00:8d:7e:6a}
	I0903 23:26:26.134831  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) DBG | domain kubernetes-upgrade-938492 has defined IP address 192.168.50.53 and MAC address 52:54:00:8d:7e:6a in network mk-kubernetes-upgrade-938492
	I0903 23:26:26.134867  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) Calling .GetSSHKeyPath
	I0903 23:26:26.134994  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) Calling .GetSSHPort
	I0903 23:26:26.135015  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) Calling .GetSSHUsername
	I0903 23:26:26.135178  145896 sshutil.go:53] new ssh client: &{IP:192.168.50.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21341-109162/.minikube/machines/kubernetes-upgrade-938492/id_rsa Username:docker}
	I0903 23:26:26.135211  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) Calling .GetSSHKeyPath
	I0903 23:26:26.135378  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) Calling .GetSSHUsername
	I0903 23:26:26.135535  145896 sshutil.go:53] new ssh client: &{IP:192.168.50.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21341-109162/.minikube/machines/kubernetes-upgrade-938492/id_rsa Username:docker}
	I0903 23:26:26.229617  145896 ssh_runner.go:195] Run: systemctl --version
	I0903 23:26:26.262205  145896 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0903 23:26:26.425148  145896 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0903 23:26:26.432801  145896 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0903 23:26:26.432878  145896 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0903 23:26:26.454029  145896 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0903 23:26:26.454051  145896 start.go:495] detecting cgroup driver to use...
	I0903 23:26:26.454110  145896 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0903 23:26:26.477927  145896 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0903 23:26:26.496298  145896 docker.go:218] disabling cri-docker service (if available) ...
	I0903 23:26:26.496369  145896 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0903 23:26:26.513923  145896 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0903 23:26:26.530231  145896 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0903 23:26:26.695667  145896 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0903 23:26:26.856965  145896 docker.go:234] disabling docker service ...
	I0903 23:26:26.857051  145896 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0903 23:26:26.875455  145896 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0903 23:26:26.894279  145896 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0903 23:26:27.130718  145896 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0903 23:26:27.305232  145896 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0903 23:26:27.326300  145896 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0903 23:26:27.354620  145896 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0903 23:26:27.354708  145896 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0903 23:26:27.371956  145896 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0903 23:26:27.372030  145896 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0903 23:26:27.385003  145896 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0903 23:26:27.398184  145896 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0903 23:26:27.411133  145896 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0903 23:26:27.429312  145896 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0903 23:26:27.440518  145896 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0903 23:26:27.440612  145896 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0903 23:26:27.463835  145896 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
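
The failed sysctl above is harmless: the /proc/sys/net/bridge tree only exists once the br_netfilter module is loaded, so on a miss the code falls back to modprobe and then enables IPv4 forwarding. A sketch of that check-then-load sequence mirroring the logged commands (requires root):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		// The sysctl key only appears once br_netfilter is loaded.
		const key = "/proc/sys/net/bridge/bridge-nf-call-iptables"
		if _, err := os.Stat(key); os.IsNotExist(err) {
			fmt.Println("bridge netfilter not present, loading br_netfilter")
			if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
				fmt.Printf("modprobe failed: %v (%s)\n", err, out)
				return
			}
		}
		// Equivalent of `echo 1 > /proc/sys/net/ipv4/ip_forward`.
		if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0644); err != nil {
			fmt.Println("enabling ip_forward:", err)
		}
	}
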
	I0903 23:26:27.480594  145896 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0903 23:26:27.646585  145896 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0903 23:26:27.765639  145896 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0903 23:26:27.765700  145896 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0903 23:26:27.770840  145896 start.go:563] Will wait 60s for crictl version
	I0903 23:26:27.770899  145896 ssh_runner.go:195] Run: which crictl
	I0903 23:26:27.774806  145896 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0903 23:26:27.828495  145896 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
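
After restarting CRI-O, the start code waits up to 60s for the socket file to appear and then up to 60s more for crictl to answer; both are simple poll loops. A sketch of the socket wait (the poll interval is an illustrative choice):

	package main

	import (
		"errors"
		"fmt"
		"os"
		"time"
	)

	// waitForSocket polls until path exists or the timeout elapses.
	func waitForSocket(path string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if _, err := os.Stat(path); err == nil {
				return nil
			}
			time.Sleep(250 * time.Millisecond)
		}
		return errors.New("timed out waiting for " + path)
	}

	func main() {
		if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("crio socket is up")
	}
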
	I0903 23:26:27.828616  145896 ssh_runner.go:195] Run: crio --version
	I0903 23:26:27.871384  145896 ssh_runner.go:195] Run: crio --version
	I0903 23:26:27.905986  145896 out.go:179] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0903 23:26:27.907135  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) Calling .GetIP
	I0903 23:26:27.910769  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) DBG | domain kubernetes-upgrade-938492 has defined MAC address 52:54:00:8d:7e:6a in network mk-kubernetes-upgrade-938492
	I0903 23:26:27.911300  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:7e:6a", ip: ""} in network mk-kubernetes-upgrade-938492: {Iface:virbr2 ExpiryTime:2025-09-04 00:26:17 +0000 UTC Type:0 Mac:52:54:00:8d:7e:6a Iaid: IPaddr:192.168.50.53 Prefix:24 Hostname:kubernetes-upgrade-938492 Clientid:01:52:54:00:8d:7e:6a}
	I0903 23:26:27.911333  145896 main.go:141] libmachine: (kubernetes-upgrade-938492) DBG | domain kubernetes-upgrade-938492 has defined IP address 192.168.50.53 and MAC address 52:54:00:8d:7e:6a in network mk-kubernetes-upgrade-938492
	I0903 23:26:27.911626  145896 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0903 23:26:27.917499  145896 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0903 23:26:27.933172  145896 kubeadm.go:875] updating cluster {Name:kubernetes-upgrade-938492 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21147/minikube-v1.36.0-1753487480-21147-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-938492 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.53 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0903 23:26:27.933336  145896 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0903 23:26:27.933432  145896 ssh_runner.go:195] Run: sudo crictl images --output json
	I0903 23:26:27.976288  145896 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0903 23:26:27.976382  145896 ssh_runner.go:195] Run: which lz4
	I0903 23:26:27.981111  145896 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0903 23:26:27.986084  145896 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0903 23:26:27.986121  145896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0903 23:26:29.688340  145896 crio.go:462] duration metric: took 1.707267135s to copy over tarball
	I0903 23:26:29.688440  145896 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0903 23:26:31.672437  145896 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.983956948s)
	I0903 23:26:31.672481  145896 crio.go:469] duration metric: took 1.984103838s to extract the tarball
	I0903 23:26:31.672492  145896 ssh_runner.go:146] rm: /preloaded.tar.lz4
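
The preload flow is: stat the tarball on the guest, scp it over only if the stat fails, extract into /var with lz4-aware tar (keeping xattrs so file capabilities survive), and delete the tarball afterwards. The equivalent sequence driven from Go, with the tar invocation mirroring the one logged above (a sketch; paths are this run's):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		const tarball = "/preloaded.tar.lz4"
		if _, err := os.Stat(tarball); err != nil {
			fmt.Println("tarball missing, would scp it over first:", err)
			return
		}
		// Mirrors the logged tar call: xattrs kept, lz4 filter, unpack into /var.
		cmd := exec.Command("tar", "--xattrs", "--xattrs-include", "security.capability",
			"-I", "lz4", "-C", "/var", "-xf", tarball)
		if out, err := cmd.CombinedOutput(); err != nil {
			fmt.Printf("extract failed: %v (%s)\n", err, out)
			return
		}
		// The tarball is only a transfer vehicle; remove it once unpacked.
		if err := os.Remove(tarball); err != nil {
			fmt.Println("cleanup:", err)
		}
	}
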
	I0903 23:26:31.724856  145896 ssh_runner.go:195] Run: sudo crictl images --output json
	I0903 23:26:31.779447  145896 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0903 23:26:31.779485  145896 cache_images.go:89] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0903 23:26:31.779549  145896 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0903 23:26:31.779582  145896 image.go:138] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0903 23:26:31.779604  145896 image.go:138] retrieving image: registry.k8s.io/coredns:1.7.0
	I0903 23:26:31.779616  145896 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0903 23:26:31.779626  145896 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0903 23:26:31.779643  145896 image.go:138] retrieving image: registry.k8s.io/pause:3.2
	I0903 23:26:31.779674  145896 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0903 23:26:31.779583  145896 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0903 23:26:31.781372  145896 image.go:181] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0903 23:26:31.781373  145896 image.go:181] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0903 23:26:31.781461  145896 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0903 23:26:31.781483  145896 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0903 23:26:31.781498  145896 image.go:181] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0903 23:26:31.781372  145896 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0903 23:26:31.781372  145896 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0903 23:26:31.782034  145896 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0903 23:26:31.975246  145896 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0903 23:26:31.975246  145896 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0903 23:26:31.988421  145896 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0903 23:26:31.991591  145896 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0903 23:26:32.027379  145896 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0903 23:26:32.039392  145896 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0903 23:26:32.063074  145896 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0903 23:26:32.073948  145896 cache_images.go:117] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0903 23:26:32.073977  145896 cache_images.go:117] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0903 23:26:32.074006  145896 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0903 23:26:32.074028  145896 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0903 23:26:32.074070  145896 ssh_runner.go:195] Run: which crictl
	I0903 23:26:32.074077  145896 ssh_runner.go:195] Run: which crictl
	I0903 23:26:32.121081  145896 cache_images.go:117] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0903 23:26:32.121115  145896 cache_images.go:117] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0903 23:26:32.121141  145896 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0903 23:26:32.121152  145896 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0903 23:26:32.121198  145896 ssh_runner.go:195] Run: which crictl
	I0903 23:26:32.121199  145896 ssh_runner.go:195] Run: which crictl
	I0903 23:26:32.201609  145896 cache_images.go:117] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0903 23:26:32.201662  145896 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0903 23:26:32.201672  145896 cache_images.go:117] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0903 23:26:32.201705  145896 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0903 23:26:32.201716  145896 ssh_runner.go:195] Run: which crictl
	I0903 23:26:32.201756  145896 ssh_runner.go:195] Run: which crictl
	I0903 23:26:32.201787  145896 cache_images.go:117] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0903 23:26:32.201819  145896 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0903 23:26:32.201832  145896 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0903 23:26:32.201855  145896 ssh_runner.go:195] Run: which crictl
	I0903 23:26:32.201863  145896 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0903 23:26:32.201872  145896 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0903 23:26:32.201904  145896 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0903 23:26:32.290121  145896 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0903 23:26:32.290231  145896 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0903 23:26:32.290252  145896 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0903 23:26:32.290259  145896 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0903 23:26:32.292333  145896 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0903 23:26:32.292420  145896 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0903 23:26:32.292469  145896 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0903 23:26:32.426546  145896 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0903 23:26:32.426610  145896 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0903 23:26:32.426657  145896 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0903 23:26:32.426680  145896 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0903 23:26:32.432163  145896 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0903 23:26:32.432249  145896 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0903 23:26:32.432287  145896 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0903 23:26:32.540701  145896 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0903 23:26:32.564434  145896 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21341-109162/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0903 23:26:32.564511  145896 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0903 23:26:32.564522  145896 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21341-109162/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0903 23:26:32.564630  145896 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0903 23:26:32.564675  145896 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21341-109162/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0903 23:26:32.564634  145896 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21341-109162/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0903 23:26:32.608115  145896 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21341-109162/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0903 23:26:32.625488  145896 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21341-109162/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0903 23:26:32.625528  145896 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21341-109162/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0903 23:26:33.071254  145896 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0903 23:26:33.217246  145896 cache_images.go:93] duration metric: took 1.437737731s to LoadCachedImages
	W0903 23:26:33.217356  145896 out.go:285] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/21341-109162/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/21341-109162/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
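
The `needs transfer` decisions above come from comparing the image ID that `podman image inspect` reports on the guest with the digest minikube expects; on mismatch or absence the image is removed with crictl and reloaded from the local cache (which fails here because the cache files do not exist yet). A sketch of the compare step, with the expected hash copied from the kube-apiserver line above:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// imageID asks podman for the local ID of an image; an empty string
	// means the image is not present in the runtime's store.
	func imageID(image string) string {
		out, err := exec.Command("sudo", "podman", "image", "inspect",
			"--format", "{{.Id}}", image).Output()
		if err != nil {
			return ""
		}
		return strings.TrimSpace(string(out))
	}

	func main() {
		const image = "registry.k8s.io/kube-apiserver:v1.20.0"
		// Expected hash from the cache_images.go:117 line above.
		const want = "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99"
		if got := imageID(image); got != want {
			fmt.Printf("%q needs transfer: have %q, want %q\n", image, got, want)
			// Real flow then runs crictl rmi and loads from .minikube/cache/images.
		} else {
			fmt.Printf("%q already present\n", image)
		}
	}
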
	I0903 23:26:33.217374  145896 kubeadm.go:926] updating node { 192.168.50.53 8443 v1.20.0 crio true true} ...
	I0903 23:26:33.217504  145896 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=kubernetes-upgrade-938492 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.53
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-938492 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0903 23:26:33.217593  145896 ssh_runner.go:195] Run: crio config
	I0903 23:26:33.264718  145896 cni.go:84] Creating CNI manager for ""
	I0903 23:26:33.264737  145896 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0903 23:26:33.264747  145896 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0903 23:26:33.264771  145896 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.53 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-938492 NodeName:kubernetes-upgrade-938492 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.53"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.53 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0903 23:26:33.264924  145896 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.53
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "kubernetes-upgrade-938492"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.53
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.53"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0903 23:26:33.265001  145896 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0903 23:26:33.276683  145896 binaries.go:44] Found k8s binaries, skipping transfer
	I0903 23:26:33.276763  145896 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0903 23:26:33.288068  145896 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (432 bytes)
	I0903 23:26:33.307321  145896 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0903 23:26:33.326630  145896 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0903 23:26:33.346049  145896 ssh_runner.go:195] Run: grep 192.168.50.53	control-plane.minikube.internal$ /etc/hosts
	I0903 23:26:33.349954  145896 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.53	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0903 23:26:33.363523  145896 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0903 23:26:33.519366  145896 ssh_runner.go:195] Run: sudo systemctl start kubelet
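	The kubelet configuration rendered above pins cgroupDriver: cgroupfs, while CRI-O commonly defaults to the systemd cgroup manager; a mismatch between the two is a classic reason for the kubelet to die right after "systemctl start kubelet", which would match the healthz failures later in this log. A minimal cross-check, assuming SSH access to the profile's VM and stock CRI-O config paths (both assumptions, not captured in this run):
	
	  $ minikube ssh -p kubernetes-upgrade-938492 -- 'sudo grep -h cgroup_manager /etc/crio/crio.conf /etc/crio/crio.conf.d/*.conf 2>/dev/null'
	  $ minikube ssh -p kubernetes-upgrade-938492 -- 'sudo grep cgroupDriver /var/lib/kubelet/config.yaml'
	  $ minikube ssh -p kubernetes-upgrade-938492 -- 'systemctl cat kubelet | head -n 20'
	
	The first two values should agree; the third shows which unit file and drop-ins (written just above) systemd actually loaded.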
	I0903 23:26:33.551772  145896 certs.go:68] Setting up /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/kubernetes-upgrade-938492 for IP: 192.168.50.53
	I0903 23:26:33.551810  145896 certs.go:194] generating shared ca certs ...
	I0903 23:26:33.551838  145896 certs.go:226] acquiring lock for ca certs: {Name:mk06ceac58fd506484973632a7bd0b701183c2c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0903 23:26:33.552060  145896 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21341-109162/.minikube/ca.key
	I0903 23:26:33.552132  145896 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21341-109162/.minikube/proxy-client-ca.key
	I0903 23:26:33.552153  145896 certs.go:256] generating profile certs ...
	I0903 23:26:33.552236  145896 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/kubernetes-upgrade-938492/client.key
	I0903 23:26:33.552260  145896 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/kubernetes-upgrade-938492/client.crt with IP's: []
	I0903 23:26:33.825379  145896 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/kubernetes-upgrade-938492/client.crt ...
	I0903 23:26:33.825430  145896 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/kubernetes-upgrade-938492/client.crt: {Name:mk99aa8de4f902d0984abe5d180b62f04ab5bc13 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0903 23:26:33.825626  145896 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/kubernetes-upgrade-938492/client.key ...
	I0903 23:26:33.825646  145896 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/kubernetes-upgrade-938492/client.key: {Name:mk16fd43b785815d992813e36965970de2b52f1f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0903 23:26:33.825761  145896 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/kubernetes-upgrade-938492/apiserver.key.e17636b7
	I0903 23:26:33.825833  145896 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/kubernetes-upgrade-938492/apiserver.crt.e17636b7 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.50.53]
	I0903 23:26:34.246088  145896 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/kubernetes-upgrade-938492/apiserver.crt.e17636b7 ...
	I0903 23:26:34.246124  145896 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/kubernetes-upgrade-938492/apiserver.crt.e17636b7: {Name:mkb6acb4d490c3efd92a3d5eb8e8cddbf1c12f42 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0903 23:26:34.246329  145896 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/kubernetes-upgrade-938492/apiserver.key.e17636b7 ...
	I0903 23:26:34.246350  145896 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/kubernetes-upgrade-938492/apiserver.key.e17636b7: {Name:mk0ae5c6786569f41d81001dc7a25b0cd7260bb5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0903 23:26:34.246452  145896 certs.go:381] copying /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/kubernetes-upgrade-938492/apiserver.crt.e17636b7 -> /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/kubernetes-upgrade-938492/apiserver.crt
	I0903 23:26:34.246527  145896 certs.go:385] copying /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/kubernetes-upgrade-938492/apiserver.key.e17636b7 -> /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/kubernetes-upgrade-938492/apiserver.key
	I0903 23:26:34.246580  145896 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/kubernetes-upgrade-938492/proxy-client.key
	I0903 23:26:34.246596  145896 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/kubernetes-upgrade-938492/proxy-client.crt with IP's: []
	I0903 23:26:34.986848  145896 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/kubernetes-upgrade-938492/proxy-client.crt ...
	I0903 23:26:34.986895  145896 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/kubernetes-upgrade-938492/proxy-client.crt: {Name:mk6abb8ed9250acaa91b47fe4bb1f1cc9e2f90d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0903 23:26:34.987124  145896 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/kubernetes-upgrade-938492/proxy-client.key ...
	I0903 23:26:34.987143  145896 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/kubernetes-upgrade-938492/proxy-client.key: {Name:mk18dd216b27a3fe17c52819e7a6813ce84bc339 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0903 23:26:34.987389  145896 certs.go:484] found cert: /home/jenkins/minikube-integration/21341-109162/.minikube/certs/113288.pem (1338 bytes)
	W0903 23:26:34.987445  145896 certs.go:480] ignoring /home/jenkins/minikube-integration/21341-109162/.minikube/certs/113288_empty.pem, impossibly tiny 0 bytes
	I0903 23:26:34.987460  145896 certs.go:484] found cert: /home/jenkins/minikube-integration/21341-109162/.minikube/certs/ca-key.pem (1675 bytes)
	I0903 23:26:34.987493  145896 certs.go:484] found cert: /home/jenkins/minikube-integration/21341-109162/.minikube/certs/ca.pem (1078 bytes)
	I0903 23:26:34.987526  145896 certs.go:484] found cert: /home/jenkins/minikube-integration/21341-109162/.minikube/certs/cert.pem (1123 bytes)
	I0903 23:26:34.987563  145896 certs.go:484] found cert: /home/jenkins/minikube-integration/21341-109162/.minikube/certs/key.pem (1675 bytes)
	I0903 23:26:34.987616  145896 certs.go:484] found cert: /home/jenkins/minikube-integration/21341-109162/.minikube/files/etc/ssl/certs/1132882.pem (1708 bytes)
	I0903 23:26:34.988179  145896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0903 23:26:35.022089  145896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0903 23:26:35.052578  145896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0903 23:26:35.079870  145896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0903 23:26:35.108651  145896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/kubernetes-upgrade-938492/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0903 23:26:35.138256  145896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/kubernetes-upgrade-938492/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0903 23:26:35.177019  145896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/kubernetes-upgrade-938492/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0903 23:26:35.209343  145896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/kubernetes-upgrade-938492/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0903 23:26:35.240205  145896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/certs/113288.pem --> /usr/share/ca-certificates/113288.pem (1338 bytes)
	I0903 23:26:35.293249  145896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/files/etc/ssl/certs/1132882.pem --> /usr/share/ca-certificates/1132882.pem (1708 bytes)
	I0903 23:26:35.330543  145896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0903 23:26:35.362587  145896 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0903 23:26:35.384983  145896 ssh_runner.go:195] Run: openssl version
	I0903 23:26:35.392277  145896 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1132882.pem && ln -fs /usr/share/ca-certificates/1132882.pem /etc/ssl/certs/1132882.pem"
	I0903 23:26:35.411402  145896 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1132882.pem
	I0903 23:26:35.416544  145896 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep  3 22:36 /usr/share/ca-certificates/1132882.pem
	I0903 23:26:35.416623  145896 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1132882.pem
	I0903 23:26:35.423988  145896 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1132882.pem /etc/ssl/certs/3ec20f2e.0"
	I0903 23:26:35.436720  145896 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0903 23:26:35.455779  145896 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0903 23:26:35.460834  145896 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep  3 22:28 /usr/share/ca-certificates/minikubeCA.pem
	I0903 23:26:35.460911  145896 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0903 23:26:35.467746  145896 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0903 23:26:35.482353  145896 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/113288.pem && ln -fs /usr/share/ca-certificates/113288.pem /etc/ssl/certs/113288.pem"
	I0903 23:26:35.495316  145896 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/113288.pem
	I0903 23:26:35.500690  145896 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep  3 22:36 /usr/share/ca-certificates/113288.pem
	I0903 23:26:35.500772  145896 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/113288.pem
	I0903 23:26:35.509169  145896 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/113288.pem /etc/ssl/certs/51391683.0"
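	The three hash-and-link rounds above follow the OpenSSL c_rehash convention: the system trust store looks CAs up through symlinks named <subject-hash>.0, so each PEM gets an "openssl x509 -hash" pass plus a matching link (b5213941.0 is the hash for minikubeCA.pem, per the commands above). Reproduced by hand for a single cert, as a sketch:
	
	  $ h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	  $ sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/$h.0"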
	I0903 23:26:35.523414  145896 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0903 23:26:35.529618  145896 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0903 23:26:35.529687  145896 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-938492 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21147/minikube-v1.36.0-1753487480-21147-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-938492 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.53 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0903 23:26:35.529788  145896 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0903 23:26:35.529844  145896 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0903 23:26:35.574034  145896 cri.go:89] found id: ""
	I0903 23:26:35.574127  145896 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0903 23:26:35.587956  145896 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0903 23:26:35.599981  145896 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0903 23:26:35.611846  145896 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0903 23:26:35.611874  145896 kubeadm.go:157] found existing configuration files:
	
	I0903 23:26:35.611937  145896 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0903 23:26:35.622584  145896 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0903 23:26:35.622647  145896 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0903 23:26:35.639126  145896 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0903 23:26:35.654873  145896 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0903 23:26:35.654938  145896 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0903 23:26:35.671729  145896 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0903 23:26:35.683445  145896 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0903 23:26:35.683528  145896 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0903 23:26:35.699640  145896 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0903 23:26:35.712134  145896 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0903 23:26:35.712217  145896 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
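	The stale-config sweep above has a simple rule: keep each kubeconfig under /etc/kubernetes only if it already points at https://control-plane.minikube.internal:8443, otherwise delete it so the upcoming kubeadm init regenerates it. Collapsed into one loop (file names taken from the log; the loop itself is illustrative):
	
	  $ for f in admin kubelet controller-manager scheduler; do
	      sudo grep -q https://control-plane.minikube.internal:8443 /etc/kubernetes/$f.conf \
	        || sudo rm -f /etc/kubernetes/$f.conf
	    done
	
	Here every grep exits with status 2 because the files do not exist yet, so all four rm calls are no-ops on this clean node.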
	I0903 23:26:35.723129  145896 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0903 23:26:35.804873  145896 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0903 23:26:35.805057  145896 kubeadm.go:310] [preflight] Running pre-flight checks
	I0903 23:26:35.962112  145896 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0903 23:26:35.962275  145896 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0903 23:26:35.962485  145896 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0903 23:26:36.184771  145896 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0903 23:26:36.249466  145896 out.go:252]   - Generating certificates and keys ...
	I0903 23:26:36.249612  145896 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0903 23:26:36.249709  145896 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0903 23:26:36.319344  145896 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0903 23:26:36.673913  145896 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0903 23:26:37.064107  145896 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0903 23:26:37.315133  145896 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0903 23:26:37.363795  145896 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0903 23:26:37.363950  145896 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-938492 localhost] and IPs [192.168.50.53 127.0.0.1 ::1]
	I0903 23:26:37.907122  145896 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0903 23:26:37.907366  145896 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-938492 localhost] and IPs [192.168.50.53 127.0.0.1 ::1]
	I0903 23:26:37.987685  145896 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0903 23:26:38.218584  145896 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0903 23:26:38.334399  145896 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0903 23:26:38.334689  145896 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0903 23:26:38.575498  145896 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0903 23:26:38.780599  145896 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0903 23:26:38.842484  145896 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0903 23:26:39.016832  145896 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0903 23:26:39.034573  145896 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0903 23:26:39.035619  145896 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0903 23:26:39.035688  145896 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0903 23:26:39.202215  145896 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0903 23:26:39.203926  145896 out.go:252]   - Booting up control plane ...
	I0903 23:26:39.204071  145896 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0903 23:26:39.209529  145896 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0903 23:26:39.210736  145896 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0903 23:26:39.211810  145896 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0903 23:26:39.216901  145896 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0903 23:27:19.211352  145896 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0903 23:27:19.211949  145896 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0903 23:27:19.212486  145896 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0903 23:27:24.212913  145896 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0903 23:27:24.213104  145896 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0903 23:27:34.212755  145896 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0903 23:27:34.212933  145896 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0903 23:27:54.212687  145896 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0903 23:27:54.212941  145896 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0903 23:28:34.214140  145896 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0903 23:28:34.214470  145896 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0903 23:28:34.214493  145896 kubeadm.go:310] 
	I0903 23:28:34.214568  145896 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0903 23:28:34.214630  145896 kubeadm.go:310] 		timed out waiting for the condition
	I0903 23:28:34.214664  145896 kubeadm.go:310] 
	I0903 23:28:34.214727  145896 kubeadm.go:310] 	This error is likely caused by:
	I0903 23:28:34.214786  145896 kubeadm.go:310] 		- The kubelet is not running
	I0903 23:28:34.214940  145896 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0903 23:28:34.214950  145896 kubeadm.go:310] 
	I0903 23:28:34.215139  145896 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0903 23:28:34.215190  145896 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0903 23:28:34.215237  145896 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0903 23:28:34.215246  145896 kubeadm.go:310] 
	I0903 23:28:34.215397  145896 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0903 23:28:34.215520  145896 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtime's CLI.
	I0903 23:28:34.215531  145896 kubeadm.go:310] 
	I0903 23:28:34.215659  145896 kubeadm.go:310] 	Here is one example of how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0903 23:28:34.215769  145896 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0903 23:28:34.215887  145896 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0903 23:28:34.216013  145896 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0903 23:28:34.216031  145896 kubeadm.go:310] 
	I0903 23:28:34.217619  145896 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0903 23:28:34.217736  145896 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0903 23:28:34.217844  145896 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0903 23:28:34.218005  145896 out.go:285] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-938492 localhost] and IPs [192.168.50.53 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-938492 localhost] and IPs [192.168.50.53 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtime's CLI.
	
		Here is one example of how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
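	The probe kubeadm kept retrying above is a plain HTTP GET against the kubelet's local healthz endpoint. It can be reproduced by hand to separate "kubelet never came up" from "kubelet is up but unhealthy"; a sketch, assuming the default healthz port 10248 shown in the log and SSH access to the node:
	
	  $ minikube ssh -p kubernetes-upgrade-938492 -- 'systemctl is-active kubelet; systemctl is-enabled kubelet'
	  $ minikube ssh -p kubernetes-upgrade-938492 -- 'curl -sS http://127.0.0.1:10248/healthz'
	  $ minikube ssh -p kubernetes-upgrade-938492 -- 'sudo journalctl -u kubelet --no-pager -n 50'
	
	"connection refused" on every retry, as here, means nothing is listening at all, i.e. the kubelet process is not running rather than failing its checks, which also fits the [WARNING Service-Kubelet] note in stderr about the service not being enabled.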
	
	I0903 23:28:34.218052  145896 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0903 23:28:35.921367  145896 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.703276762s)
	I0903 23:28:35.921493  145896 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0903 23:28:35.939939  145896 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0903 23:28:35.950735  145896 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0903 23:28:35.950760  145896 kubeadm.go:157] found existing configuration files:
	
	I0903 23:28:35.950809  145896 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0903 23:28:35.961362  145896 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0903 23:28:35.961443  145896 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0903 23:28:35.974811  145896 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0903 23:28:35.985058  145896 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0903 23:28:35.985117  145896 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0903 23:28:35.999300  145896 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0903 23:28:36.012617  145896 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0903 23:28:36.012685  145896 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0903 23:28:36.027023  145896 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0903 23:28:36.037712  145896 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0903 23:28:36.037776  145896 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0903 23:28:36.048729  145896 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0903 23:28:36.270026  145896 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0903 23:30:32.050706  145896 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0903 23:30:32.050815  145896 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0903 23:30:32.053310  145896 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0903 23:30:32.053412  145896 kubeadm.go:310] [preflight] Running pre-flight checks
	I0903 23:30:32.053557  145896 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0903 23:30:32.053737  145896 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0903 23:30:32.053883  145896 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0903 23:30:32.054003  145896 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0903 23:30:32.055478  145896 out.go:252]   - Generating certificates and keys ...
	I0903 23:30:32.055576  145896 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0903 23:30:32.055659  145896 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0903 23:30:32.055769  145896 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0903 23:30:32.055898  145896 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0903 23:30:32.056004  145896 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0903 23:30:32.056092  145896 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0903 23:30:32.056198  145896 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0903 23:30:32.056294  145896 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0903 23:30:32.056399  145896 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0903 23:30:32.056514  145896 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0903 23:30:32.056571  145896 kubeadm.go:310] [certs] Using the existing "sa" key
	I0903 23:30:32.056653  145896 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0903 23:30:32.056727  145896 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0903 23:30:32.056804  145896 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0903 23:30:32.056896  145896 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0903 23:30:32.056976  145896 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0903 23:30:32.057125  145896 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0903 23:30:32.057247  145896 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0903 23:30:32.057303  145896 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0903 23:30:32.057417  145896 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0903 23:30:32.058874  145896 out.go:252]   - Booting up control plane ...
	I0903 23:30:32.058999  145896 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0903 23:30:32.059102  145896 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0903 23:30:32.059207  145896 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0903 23:30:32.059331  145896 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0903 23:30:32.059580  145896 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0903 23:30:32.059640  145896 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0903 23:30:32.059723  145896 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0903 23:30:32.059874  145896 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0903 23:30:32.059928  145896 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0903 23:30:32.060109  145896 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0903 23:30:32.060167  145896 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0903 23:30:32.060409  145896 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0903 23:30:32.060473  145896 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0903 23:30:32.060629  145896 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0903 23:30:32.060686  145896 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0903 23:30:32.060834  145896 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0903 23:30:32.060841  145896 kubeadm.go:310] 
	I0903 23:30:32.060872  145896 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0903 23:30:32.060905  145896 kubeadm.go:310] 		timed out waiting for the condition
	I0903 23:30:32.060911  145896 kubeadm.go:310] 
	I0903 23:30:32.060938  145896 kubeadm.go:310] 	This error is likely caused by:
	I0903 23:30:32.060966  145896 kubeadm.go:310] 		- The kubelet is not running
	I0903 23:30:32.061093  145896 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0903 23:30:32.061103  145896 kubeadm.go:310] 
	I0903 23:30:32.061183  145896 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0903 23:30:32.061221  145896 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0903 23:30:32.061272  145896 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0903 23:30:32.061284  145896 kubeadm.go:310] 
	I0903 23:30:32.061422  145896 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0903 23:30:32.061530  145896 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtime's CLI.
	I0903 23:30:32.061541  145896 kubeadm.go:310] 
	I0903 23:30:32.061714  145896 kubeadm.go:310] 	Here is one example of how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0903 23:30:32.061856  145896 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0903 23:30:32.061984  145896 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0903 23:30:32.062102  145896 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0903 23:30:32.062174  145896 kubeadm.go:394] duration metric: took 3m56.532494156s to StartCluster
	I0903 23:30:32.062206  145896 kubeadm.go:310] 
	I0903 23:30:32.062243  145896 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:30:32.062309  145896 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:30:32.123556  145896 cri.go:89] found id: ""
	I0903 23:30:32.123583  145896 logs.go:282] 0 containers: []
	W0903 23:30:32.123593  145896 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:30:32.123601  145896 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:30:32.123670  145896 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:30:32.169246  145896 cri.go:89] found id: ""
	I0903 23:30:32.169278  145896 logs.go:282] 0 containers: []
	W0903 23:30:32.169289  145896 logs.go:284] No container was found matching "etcd"
	I0903 23:30:32.169297  145896 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:30:32.169357  145896 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:30:32.210214  145896 cri.go:89] found id: ""
	I0903 23:30:32.210247  145896 logs.go:282] 0 containers: []
	W0903 23:30:32.210258  145896 logs.go:284] No container was found matching "coredns"
	I0903 23:30:32.210266  145896 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:30:32.210330  145896 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:30:32.250195  145896 cri.go:89] found id: ""
	I0903 23:30:32.250227  145896 logs.go:282] 0 containers: []
	W0903 23:30:32.250238  145896 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:30:32.250245  145896 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:30:32.250313  145896 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:30:32.293312  145896 cri.go:89] found id: ""
	I0903 23:30:32.293352  145896 logs.go:282] 0 containers: []
	W0903 23:30:32.293364  145896 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:30:32.293371  145896 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:30:32.293459  145896 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:30:32.338661  145896 cri.go:89] found id: ""
	I0903 23:30:32.338687  145896 logs.go:282] 0 containers: []
	W0903 23:30:32.338695  145896 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:30:32.338702  145896 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:30:32.338758  145896 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:30:32.380865  145896 cri.go:89] found id: ""
	I0903 23:30:32.380892  145896 logs.go:282] 0 containers: []
	W0903 23:30:32.380902  145896 logs.go:284] No container was found matching "kindnet"
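	The seven lookups above are the same crictl query repeated with a different --name filter, and every one returns an empty ID list: not a single control-plane or CNI container was ever created. Since crictl's --name filter accepts a regular expression, the whole loop collapses to one alternation (a sketch, reusing the component names from the log):
	
	  $ minikube ssh -p kubernetes-upgrade-938492 -- \
	      "sudo crictl ps -a --quiet --name 'kube-apiserver|etcd|coredns|kube-scheduler|kube-proxy|kube-controller-manager|kindnet'"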
	I0903 23:30:32.380915  145896 logs.go:123] Gathering logs for kubelet ...
	I0903 23:30:32.380930  145896 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0903 23:30:32.441448  145896 logs.go:123] Gathering logs for dmesg ...
	I0903 23:30:32.441476  145896 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0903 23:30:32.457650  145896 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:30:32.457687  145896 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:30:32.542692  145896 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0903 23:30:32.542724  145896 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:30:32.542742  145896 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0903 23:30:32.653114  145896 logs.go:123] Gathering logs for container status ...
	I0903 23:30:32.653154  145896 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
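	The container-status command above carries its own fallback: 'which crictl || echo crictl' keeps the command name even when crictl is missing from PATH, so the first pipeline fails cleanly and the trailing '|| sudo docker ps -a' takes over on Docker-runtime nodes. The same pattern in isolation:
	
	  $ sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a
	
	On this CRI-O node crictl is present but the kube-system listings are empty, consistent with the per-component lookups above and with the refused connection on localhost:8443 during describe-nodes: the kubelet never started the static pods, so there is no apiserver to answer.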
	W0903 23:30:32.698812  145896 out.go:434] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0903 23:30:32.698873  145896 out.go:285] * 
	W0903 23:30:32.698932  145896 out.go:285] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0903 23:30:32.698952  145896 out.go:285] * 
	W0903 23:30:32.701138  145896 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0903 23:30:32.704211  145896 out.go:203] 
	W0903 23:30:32.705697  145896 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0903 23:30:32.705732  145896 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0903 23:30:32.705767  145896 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0903 23:30:32.707217  145896 out.go:203] 

                                                
                                                
** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-linux-amd64 start -p kubernetes-upgrade-938492 --memory=3072 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109
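Exit status 109 is the code minikube used when it gave up with K8S_KUBELET_NOT_RUNNING in the log above. A minimal manual retry following the log's own Suggestion line, assuming the same profile name and driver flags as the failed run (a sketch, not output recorded by the harness):

	# inspect why the kubelet never answered on localhost:10248 inside the VM
	out/minikube-linux-amd64 -p kubernetes-upgrade-938492 ssh "sudo journalctl -xeu kubelet | tail -n 50"
	# retry the v1.20.0 start with the kubelet cgroup driver pinned to systemd,
	# per the Suggestion line printed above
	out/minikube-linux-amd64 start -p kubernetes-upgrade-938492 --memory=3072 \
	  --kubernetes-version=v1.20.0 --driver=kvm2 --container-runtime=crio \
	  --extra-config=kubelet.cgroup-driver=systemd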
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-938492
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-938492: (3.352976092s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-938492 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-938492 status --format={{.Host}}: exit status 7 (86.304207ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
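Exit code 7 here is consistent with minikube status using bitmask-style exit codes, where the host, cluster, and kubelet each contribute a bit when down, so a deliberately stopped profile is expected to exit non-zero; that reading of the code is an inference, not stated in this log, which is also why the harness notes "may be ok". A quick manual check under the same assumption:

	# a stopped profile prints "Stopped" and exits non-zero by design
	out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-938492; echo "status exit code: $?"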
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-938492 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-938492 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m4.349507905s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-938492 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-938492 --memory=3072 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-938492 --memory=3072 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio: exit status 106 (88.415594ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-938492] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21341
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21341-109162/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21341-109162/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.0 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-938492
	    minikube start -p kubernetes-upgrade-938492 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-9384922 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.0, by running:
	    
	    minikube start -p kubernetes-upgrade-938492 --kubernetes-version=v1.34.0
	    

                                                
                                                
** /stderr **
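The downgrade attempt fails fast (exit status 106 in roughly 88ms), so minikube evidently rejects the version change before touching the running cluster. Of the three suggestions printed above, only the first is a true downgrade path; a sketch of it with this profile (the test itself takes option 3 and restarts at v1.34.0 below):

	# recreate the profile at the older version instead of downgrading in place
	out/minikube-linux-amd64 delete -p kubernetes-upgrade-938492
	out/minikube-linux-amd64 start -p kubernetes-upgrade-938492 --kubernetes-version=v1.20.0 \
	  --driver=kvm2 --container-runtime=crio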
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-938492 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-938492 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (47.620547992s)
version_upgrade_test.go:279: *** TestKubernetesUpgrade FAILED at 2025-09-03 23:32:28.320934839 +0000 UTC m=+3935.288909388
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestKubernetesUpgrade]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-938492 -n kubernetes-upgrade-938492
helpers_test.go:252: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-938492 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p kubernetes-upgrade-938492 logs -n 25: (1.781492006s)
helpers_test.go:260: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                          ARGS                                                                                                           │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p pause-957460 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio                                                                                                                 │ pause-957460              │ jenkins │ v1.36.0 │ 03 Sep 25 23:28 UTC │ 03 Sep 25 23:29 UTC │
	│ delete  │ -p force-systemd-env-753758                                                                                                                                                                                             │ force-systemd-env-753758  │ jenkins │ v1.36.0 │ 03 Sep 25 23:28 UTC │ 03 Sep 25 23:28 UTC │
	│ start   │ -p stopped-upgrade-924805 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio                                                                                                                  │ stopped-upgrade-924805    │ jenkins │ v1.36.0 │ 03 Sep 25 23:28 UTC │ 03 Sep 25 23:29 UTC │
	│ start   │ -p cert-expiration-689039 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=crio                                                                                                                    │ cert-expiration-689039    │ jenkins │ v1.36.0 │ 03 Sep 25 23:28 UTC │ 03 Sep 25 23:29 UTC │
	│ start   │ -p pause-957460 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio                                                                                                                                          │ pause-957460              │ jenkins │ v1.36.0 │ 03 Sep 25 23:29 UTC │ 03 Sep 25 23:30 UTC │
	│ mount   │ /home/jenkins:/minikube-host --profile stopped-upgrade-924805 --v 0 --9p-version 9p2000.L --gid docker --ip  --msize 262144 --port 0 --type 9p --uid docker                                                             │ stopped-upgrade-924805    │ jenkins │ v1.36.0 │ 03 Sep 25 23:29 UTC │                     │
	│ delete  │ -p stopped-upgrade-924805                                                                                                                                                                                               │ stopped-upgrade-924805    │ jenkins │ v1.36.0 │ 03 Sep 25 23:29 UTC │ 03 Sep 25 23:29 UTC │
	│ start   │ -p NoKubernetes-561956 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio                                                                                                                │ NoKubernetes-561956       │ jenkins │ v1.36.0 │ 03 Sep 25 23:29 UTC │                     │
	│ start   │ -p NoKubernetes-561956 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio                                                                                                                     │ NoKubernetes-561956       │ jenkins │ v1.36.0 │ 03 Sep 25 23:29 UTC │ 03 Sep 25 23:30 UTC │
	│ delete  │ -p pause-957460                                                                                                                                                                                                         │ pause-957460              │ jenkins │ v1.36.0 │ 03 Sep 25 23:30 UTC │ 03 Sep 25 23:30 UTC │
	│ start   │ -p cert-options-161097 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio │ cert-options-161097       │ jenkins │ v1.36.0 │ 03 Sep 25 23:30 UTC │ 03 Sep 25 23:31 UTC │
	│ start   │ -p NoKubernetes-561956 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio                                                                                                     │ NoKubernetes-561956       │ jenkins │ v1.36.0 │ 03 Sep 25 23:30 UTC │ 03 Sep 25 23:31 UTC │
	│ stop    │ -p kubernetes-upgrade-938492                                                                                                                                                                                            │ kubernetes-upgrade-938492 │ jenkins │ v1.36.0 │ 03 Sep 25 23:30 UTC │ 03 Sep 25 23:30 UTC │
	│ start   │ -p kubernetes-upgrade-938492 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio                                                                                  │ kubernetes-upgrade-938492 │ jenkins │ v1.36.0 │ 03 Sep 25 23:30 UTC │ 03 Sep 25 23:31 UTC │
	│ delete  │ -p NoKubernetes-561956                                                                                                                                                                                                  │ NoKubernetes-561956       │ jenkins │ v1.36.0 │ 03 Sep 25 23:31 UTC │ 03 Sep 25 23:31 UTC │
	│ start   │ -p NoKubernetes-561956 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio                                                                                                     │ NoKubernetes-561956       │ jenkins │ v1.36.0 │ 03 Sep 25 23:31 UTC │ 03 Sep 25 23:31 UTC │
	│ ssh     │ cert-options-161097 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                             │ cert-options-161097       │ jenkins │ v1.36.0 │ 03 Sep 25 23:31 UTC │ 03 Sep 25 23:31 UTC │
	│ ssh     │ -p cert-options-161097 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                           │ cert-options-161097       │ jenkins │ v1.36.0 │ 03 Sep 25 23:31 UTC │ 03 Sep 25 23:31 UTC │
	│ delete  │ -p cert-options-161097                                                                                                                                                                                                  │ cert-options-161097       │ jenkins │ v1.36.0 │ 03 Sep 25 23:31 UTC │ 03 Sep 25 23:31 UTC │
	│ start   │ -p auto-380966 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio                                                                                                   │ auto-380966               │ jenkins │ v1.36.0 │ 03 Sep 25 23:31 UTC │                     │
	│ start   │ -p kubernetes-upgrade-938492 --memory=3072 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio                                                                                                         │ kubernetes-upgrade-938492 │ jenkins │ v1.36.0 │ 03 Sep 25 23:31 UTC │                     │
	│ start   │ -p kubernetes-upgrade-938492 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio                                                                                  │ kubernetes-upgrade-938492 │ jenkins │ v1.36.0 │ 03 Sep 25 23:31 UTC │ 03 Sep 25 23:32 UTC │
	│ ssh     │ -p NoKubernetes-561956 sudo systemctl is-active --quiet service kubelet                                                                                                                                                 │ NoKubernetes-561956       │ jenkins │ v1.36.0 │ 03 Sep 25 23:31 UTC │                     │
	│ stop    │ -p NoKubernetes-561956                                                                                                                                                                                                  │ NoKubernetes-561956       │ jenkins │ v1.36.0 │ 03 Sep 25 23:31 UTC │ 03 Sep 25 23:31 UTC │
	│ start   │ -p NoKubernetes-561956 --driver=kvm2  --container-runtime=crio                                                                                                                                                          │ NoKubernetes-561956       │ jenkins │ v1.36.0 │ 03 Sep 25 23:31 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/03 23:31:50
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0903 23:31:50.110604  153807 out.go:360] Setting OutFile to fd 1 ...
	I0903 23:31:50.110832  153807 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0903 23:31:50.110836  153807 out.go:374] Setting ErrFile to fd 2...
	I0903 23:31:50.110838  153807 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0903 23:31:50.111058  153807 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21341-109162/.minikube/bin
	I0903 23:31:50.111560  153807 out.go:368] Setting JSON to false
	I0903 23:31:50.112529  153807 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":8054,"bootTime":1756934256,"procs":215,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0903 23:31:50.112581  153807 start.go:140] virtualization: kvm guest
	I0903 23:31:50.114423  153807 out.go:179] * [NoKubernetes-561956] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	I0903 23:31:50.115745  153807 notify.go:220] Checking for updates...
	I0903 23:31:50.115778  153807 out.go:179]   - MINIKUBE_LOCATION=21341
	I0903 23:31:50.117054  153807 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0903 23:31:50.118311  153807 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21341-109162/kubeconfig
	I0903 23:31:50.119735  153807 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21341-109162/.minikube
	I0903 23:31:50.120921  153807 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0903 23:31:50.122156  153807 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0903 23:31:50.124000  153807 config.go:182] Loaded profile config "NoKubernetes-561956": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I0903 23:31:50.124550  153807 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21341-109162/.minikube/bin/docker-machine-driver-kvm2
	I0903 23:31:50.124620  153807 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0903 23:31:50.139874  153807 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43499
	I0903 23:31:50.140409  153807 main.go:141] libmachine: () Calling .GetVersion
	I0903 23:31:50.140981  153807 main.go:141] libmachine: Using API Version  1
	I0903 23:31:50.141001  153807 main.go:141] libmachine: () Calling .SetConfigRaw
	I0903 23:31:50.141485  153807 main.go:141] libmachine: () Calling .GetMachineName
	I0903 23:31:50.141692  153807 main.go:141] libmachine: (NoKubernetes-561956) Calling .DriverName
	I0903 23:31:50.141952  153807 start.go:1797] No Kubernetes version set for minikube, setting Kubernetes version to v0.0.0
	I0903 23:31:50.141978  153807 driver.go:421] Setting default libvirt URI to qemu:///system
	I0903 23:31:50.142399  153807 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21341-109162/.minikube/bin/docker-machine-driver-kvm2
	I0903 23:31:50.142443  153807 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0903 23:31:50.158990  153807 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45983
	I0903 23:31:50.159424  153807 main.go:141] libmachine: () Calling .GetVersion
	I0903 23:31:50.159881  153807 main.go:141] libmachine: Using API Version  1
	I0903 23:31:50.159901  153807 main.go:141] libmachine: () Calling .SetConfigRaw
	I0903 23:31:50.160397  153807 main.go:141] libmachine: () Calling .GetMachineName
	I0903 23:31:50.160640  153807 main.go:141] libmachine: (NoKubernetes-561956) Calling .DriverName
	I0903 23:31:50.198711  153807 out.go:179] * Using the kvm2 driver based on existing profile
	I0903 23:31:50.199861  153807 start.go:304] selected driver: kvm2
	I0903 23:31:50.199866  153807 start.go:918] validating driver "kvm2" against &{Name:NoKubernetes-561956 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21147/minikube-v1.36.0-1753487480-21147-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v0.0.0 ClusterName:NoKubernetes-561956 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.87 Port:8443 KubernetesVersion:v0.0.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0903 23:31:50.199936  153807 start.go:929] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0903 23:31:50.200253  153807 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0903 23:31:50.200321  153807 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21341-109162/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0903 23:31:50.216730  153807 install.go:137] /home/jenkins/minikube-integration/21341-109162/.minikube/bin/docker-machine-driver-kvm2 version is 1.36.0
	I0903 23:31:50.217535  153807 cni.go:84] Creating CNI manager for ""
	I0903 23:31:50.217575  153807 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0903 23:31:50.217625  153807 start.go:348] cluster config:
	{Name:NoKubernetes-561956 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21147/minikube-v1.36.0-1753487480-21147-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v0.0.0 ClusterName:NoKubernetes-561956 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.87 Port:8443 KubernetesVersion:v0.0.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0903 23:31:50.217725  153807 iso.go:125] acquiring lock: {Name:mk1032fbced3c9e76ba5a04480289e9f07d0eb9c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0903 23:31:50.219950  153807 out.go:179] * Starting minikube without Kubernetes in cluster NoKubernetes-561956
	I0903 23:31:47.527518  153025 main.go:141] libmachine: (auto-380966) DBG | domain auto-380966 has defined MAC address 52:54:00:18:46:db in network mk-auto-380966
	I0903 23:31:47.528023  153025 main.go:141] libmachine: (auto-380966) DBG | unable to find current IP address of domain auto-380966 in network mk-auto-380966
	I0903 23:31:47.528051  153025 main.go:141] libmachine: (auto-380966) DBG | I0903 23:31:47.528016  153457 retry.go:31] will retry after 509.974647ms: waiting for domain to come up
	I0903 23:31:48.039382  153025 main.go:141] libmachine: (auto-380966) DBG | domain auto-380966 has defined MAC address 52:54:00:18:46:db in network mk-auto-380966
	I0903 23:31:48.039879  153025 main.go:141] libmachine: (auto-380966) DBG | unable to find current IP address of domain auto-380966 in network mk-auto-380966
	I0903 23:31:48.039897  153025 main.go:141] libmachine: (auto-380966) DBG | I0903 23:31:48.039818  153457 retry.go:31] will retry after 683.634331ms: waiting for domain to come up
	I0903 23:31:48.724793  153025 main.go:141] libmachine: (auto-380966) DBG | domain auto-380966 has defined MAC address 52:54:00:18:46:db in network mk-auto-380966
	I0903 23:31:48.727369  153025 main.go:141] libmachine: (auto-380966) DBG | unable to find current IP address of domain auto-380966 in network mk-auto-380966
	I0903 23:31:48.727391  153025 main.go:141] libmachine: (auto-380966) DBG | I0903 23:31:48.727343  153457 retry.go:31] will retry after 753.929049ms: waiting for domain to come up
	I0903 23:31:49.483326  153025 main.go:141] libmachine: (auto-380966) DBG | domain auto-380966 has defined MAC address 52:54:00:18:46:db in network mk-auto-380966
	I0903 23:31:49.483839  153025 main.go:141] libmachine: (auto-380966) DBG | unable to find current IP address of domain auto-380966 in network mk-auto-380966
	I0903 23:31:49.483918  153025 main.go:141] libmachine: (auto-380966) DBG | I0903 23:31:49.483845  153457 retry.go:31] will retry after 1.191950603s: waiting for domain to come up
	I0903 23:31:50.677414  153025 main.go:141] libmachine: (auto-380966) DBG | domain auto-380966 has defined MAC address 52:54:00:18:46:db in network mk-auto-380966
	I0903 23:31:50.677959  153025 main.go:141] libmachine: (auto-380966) DBG | unable to find current IP address of domain auto-380966 in network mk-auto-380966
	I0903 23:31:50.678006  153025 main.go:141] libmachine: (auto-380966) DBG | I0903 23:31:50.677936  153457 retry.go:31] will retry after 1.214419359s: waiting for domain to come up
	I0903 23:31:51.893965  153025 main.go:141] libmachine: (auto-380966) DBG | domain auto-380966 has defined MAC address 52:54:00:18:46:db in network mk-auto-380966
	I0903 23:31:51.894608  153025 main.go:141] libmachine: (auto-380966) DBG | unable to find current IP address of domain auto-380966 in network mk-auto-380966
	I0903 23:31:51.894634  153025 main.go:141] libmachine: (auto-380966) DBG | I0903 23:31:51.894575  153457 retry.go:31] will retry after 1.804323095s: waiting for domain to come up
	I0903 23:31:50.221051  153807 preload.go:131] Checking if preload exists for k8s version v0.0.0 and runtime crio
	W0903 23:31:51.085320  153807 preload.go:114] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v0.0.0/preloaded-images-k8s-v18-v0.0.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	I0903 23:31:51.085521  153807 profile.go:143] Saving config to /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/NoKubernetes-561956/config.json ...
	I0903 23:31:51.085879  153807 start.go:360] acquireMachinesLock for NoKubernetes-561956: {Name:mkcbe368d68a51a2a3c0eadc653c4df7d9736b4b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0903 23:31:53.701608  153025 main.go:141] libmachine: (auto-380966) DBG | domain auto-380966 has defined MAC address 52:54:00:18:46:db in network mk-auto-380966
	I0903 23:31:53.702121  153025 main.go:141] libmachine: (auto-380966) DBG | unable to find current IP address of domain auto-380966 in network mk-auto-380966
	I0903 23:31:53.702190  153025 main.go:141] libmachine: (auto-380966) DBG | I0903 23:31:53.702120  153457 retry.go:31] will retry after 2.138037739s: waiting for domain to come up
	I0903 23:31:55.841974  153025 main.go:141] libmachine: (auto-380966) DBG | domain auto-380966 has defined MAC address 52:54:00:18:46:db in network mk-auto-380966
	I0903 23:31:55.842525  153025 main.go:141] libmachine: (auto-380966) DBG | unable to find current IP address of domain auto-380966 in network mk-auto-380966
	I0903 23:31:55.842564  153025 main.go:141] libmachine: (auto-380966) DBG | I0903 23:31:55.842493  153457 retry.go:31] will retry after 2.688275265s: waiting for domain to come up
	I0903 23:31:58.532764  153025 main.go:141] libmachine: (auto-380966) DBG | domain auto-380966 has defined MAC address 52:54:00:18:46:db in network mk-auto-380966
	I0903 23:31:58.533362  153025 main.go:141] libmachine: (auto-380966) DBG | unable to find current IP address of domain auto-380966 in network mk-auto-380966
	I0903 23:31:58.533417  153025 main.go:141] libmachine: (auto-380966) DBG | I0903 23:31:58.533314  153457 retry.go:31] will retry after 2.373595638s: waiting for domain to come up
	I0903 23:32:00.909120  153025 main.go:141] libmachine: (auto-380966) DBG | domain auto-380966 has defined MAC address 52:54:00:18:46:db in network mk-auto-380966
	I0903 23:32:00.909834  153025 main.go:141] libmachine: (auto-380966) DBG | unable to find current IP address of domain auto-380966 in network mk-auto-380966
	I0903 23:32:00.909876  153025 main.go:141] libmachine: (auto-380966) DBG | I0903 23:32:00.909800  153457 retry.go:31] will retry after 4.320247516s: waiting for domain to come up
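The burst of retry.go entries above polls libvirt for the new domain's IP with growing, jittered delays (754ms, 1.19s, 1.21s, 1.8s, ...). A minimal Go sketch of that backoff loop, assuming a hypothetical lookupDomainIP in place of the real DHCP-lease query:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupDomainIP is a stand-in for querying libvirt's DHCP leases for
// the domain's MAC; it fails until the guest has requested a lease.
func lookupDomainIP(attempt int) (string, error) {
	if attempt < 5 {
		return "", errors.New("unable to find current IP address")
	}
	return "192.168.61.89", nil
}

func main() {
	backoff := 500 * time.Millisecond
	for attempt := 0; ; attempt++ {
		ip, err := lookupDomainIP(attempt)
		if err == nil {
			fmt.Println("found domain IP:", ip)
			return
		}
		// Grow the delay and add jitter, like the intervals in the log.
		sleep := backoff + time.Duration(rand.Int63n(int64(backoff)))
		fmt.Printf("will retry after %s: waiting for domain to come up\n", sleep)
		time.Sleep(sleep)
		backoff = backoff * 3 / 2
	}
}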
	I0903 23:32:07.046784  153376 start.go:364] duration metric: took 26.177637999s to acquireMachinesLock for "kubernetes-upgrade-938492"
	I0903 23:32:07.046846  153376 start.go:96] Skipping create...Using existing machine configuration
	I0903 23:32:07.046860  153376 fix.go:54] fixHost starting: 
	I0903 23:32:07.047276  153376 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21341-109162/.minikube/bin/docker-machine-driver-kvm2
	I0903 23:32:07.047312  153376 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0903 23:32:07.068135  153376 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38453
	I0903 23:32:07.068582  153376 main.go:141] libmachine: () Calling .GetVersion
	I0903 23:32:07.069050  153376 main.go:141] libmachine: Using API Version  1
	I0903 23:32:07.069092  153376 main.go:141] libmachine: () Calling .SetConfigRaw
	I0903 23:32:07.069536  153376 main.go:141] libmachine: () Calling .GetMachineName
	I0903 23:32:07.069739  153376 main.go:141] libmachine: (kubernetes-upgrade-938492) Calling .DriverName
	I0903 23:32:07.069876  153376 main.go:141] libmachine: (kubernetes-upgrade-938492) Calling .GetState
	I0903 23:32:07.071543  153376 fix.go:112] recreateIfNeeded on kubernetes-upgrade-938492: state=Running err=<nil>
	W0903 23:32:07.071561  153376 fix.go:138] unexpected machine state, will restart: <nil>
	I0903 23:32:05.231432  153025 main.go:141] libmachine: (auto-380966) DBG | domain auto-380966 has defined MAC address 52:54:00:18:46:db in network mk-auto-380966
	I0903 23:32:05.231990  153025 main.go:141] libmachine: (auto-380966) found domain IP: 192.168.61.89
	I0903 23:32:05.232017  153025 main.go:141] libmachine: (auto-380966) reserving static IP address...
	I0903 23:32:05.232027  153025 main.go:141] libmachine: (auto-380966) DBG | domain auto-380966 has current primary IP address 192.168.61.89 and MAC address 52:54:00:18:46:db in network mk-auto-380966
	I0903 23:32:05.232397  153025 main.go:141] libmachine: (auto-380966) DBG | unable to find host DHCP lease matching {name: "auto-380966", mac: "52:54:00:18:46:db", ip: "192.168.61.89"} in network mk-auto-380966
	I0903 23:32:05.311345  153025 main.go:141] libmachine: (auto-380966) DBG | Getting to WaitForSSH function...
	I0903 23:32:05.311377  153025 main.go:141] libmachine: (auto-380966) reserved static IP address 192.168.61.89 for domain auto-380966
	I0903 23:32:05.311391  153025 main.go:141] libmachine: (auto-380966) waiting for SSH...
	I0903 23:32:05.314349  153025 main.go:141] libmachine: (auto-380966) DBG | domain auto-380966 has defined MAC address 52:54:00:18:46:db in network mk-auto-380966
	I0903 23:32:05.314745  153025 main.go:141] libmachine: (auto-380966) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:46:db", ip: ""} in network mk-auto-380966: {Iface:virbr3 ExpiryTime:2025-09-04 00:31:59 +0000 UTC Type:0 Mac:52:54:00:18:46:db Iaid: IPaddr:192.168.61.89 Prefix:24 Hostname:minikube Clientid:01:52:54:00:18:46:db}
	I0903 23:32:05.314778  153025 main.go:141] libmachine: (auto-380966) DBG | domain auto-380966 has defined IP address 192.168.61.89 and MAC address 52:54:00:18:46:db in network mk-auto-380966
	I0903 23:32:05.314962  153025 main.go:141] libmachine: (auto-380966) DBG | Using SSH client type: external
	I0903 23:32:05.314985  153025 main.go:141] libmachine: (auto-380966) DBG | Using SSH private key: /home/jenkins/minikube-integration/21341-109162/.minikube/machines/auto-380966/id_rsa (-rw-------)
	I0903 23:32:05.315004  153025 main.go:141] libmachine: (auto-380966) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.89 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/21341-109162/.minikube/machines/auto-380966/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0903 23:32:05.315021  153025 main.go:141] libmachine: (auto-380966) DBG | About to run SSH command:
	I0903 23:32:05.315034  153025 main.go:141] libmachine: (auto-380966) DBG | exit 0
	I0903 23:32:05.446030  153025 main.go:141] libmachine: (auto-380966) DBG | SSH cmd err, output: <nil>: 
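WaitForSSH above succeeds once `exit 0` run through the external ssh client returns status 0. A small Go sketch of that readiness probe with os/exec, using the same style of options shown in the log (the key path and attempt cap here are illustrative):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// probeSSH runs `exit 0` through the system ssh binary; a nil error
// means sshd inside the guest accepted our key and ran the command.
func probeSSH(ip, keyPath string) error {
	args := []string{
		"-F", "/dev/null",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"-p", "22",
		"docker@" + ip,
		"exit 0",
	}
	return exec.Command("/usr/bin/ssh", args...).Run()
}

func main() {
	for attempt := 0; attempt < 30; attempt++ {
		if err := probeSSH("192.168.61.89", "/path/to/id_rsa"); err == nil {
			fmt.Println("SSH is available")
			return
		}
		time.Sleep(time.Second)
	}
	fmt.Println("gave up waiting for SSH")
}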
	I0903 23:32:05.446372  153025 main.go:141] libmachine: (auto-380966) KVM machine creation complete
	I0903 23:32:05.446740  153025 main.go:141] libmachine: (auto-380966) Calling .GetConfigRaw
	I0903 23:32:05.447424  153025 main.go:141] libmachine: (auto-380966) Calling .DriverName
	I0903 23:32:05.447610  153025 main.go:141] libmachine: (auto-380966) Calling .DriverName
	I0903 23:32:05.447722  153025 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0903 23:32:05.447732  153025 main.go:141] libmachine: (auto-380966) Calling .GetState
	I0903 23:32:05.448928  153025 main.go:141] libmachine: Detecting operating system of created instance...
	I0903 23:32:05.448940  153025 main.go:141] libmachine: Waiting for SSH to be available...
	I0903 23:32:05.448946  153025 main.go:141] libmachine: Getting to WaitForSSH function...
	I0903 23:32:05.448951  153025 main.go:141] libmachine: (auto-380966) Calling .GetSSHHostname
	I0903 23:32:05.451111  153025 main.go:141] libmachine: (auto-380966) DBG | domain auto-380966 has defined MAC address 52:54:00:18:46:db in network mk-auto-380966
	I0903 23:32:05.451467  153025 main.go:141] libmachine: (auto-380966) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:46:db", ip: ""} in network mk-auto-380966: {Iface:virbr3 ExpiryTime:2025-09-04 00:31:59 +0000 UTC Type:0 Mac:52:54:00:18:46:db Iaid: IPaddr:192.168.61.89 Prefix:24 Hostname:auto-380966 Clientid:01:52:54:00:18:46:db}
	I0903 23:32:05.451490  153025 main.go:141] libmachine: (auto-380966) DBG | domain auto-380966 has defined IP address 192.168.61.89 and MAC address 52:54:00:18:46:db in network mk-auto-380966
	I0903 23:32:05.451620  153025 main.go:141] libmachine: (auto-380966) Calling .GetSSHPort
	I0903 23:32:05.451836  153025 main.go:141] libmachine: (auto-380966) Calling .GetSSHKeyPath
	I0903 23:32:05.451987  153025 main.go:141] libmachine: (auto-380966) Calling .GetSSHKeyPath
	I0903 23:32:05.452112  153025 main.go:141] libmachine: (auto-380966) Calling .GetSSHUsername
	I0903 23:32:05.452280  153025 main.go:141] libmachine: Using SSH client type: native
	I0903 23:32:05.452579  153025 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83a420] 0x83d120 <nil>  [] 0s} 192.168.61.89 22 <nil> <nil>}
	I0903 23:32:05.452595  153025 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0903 23:32:05.565233  153025 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0903 23:32:05.565263  153025 main.go:141] libmachine: Detecting the provisioner...
	I0903 23:32:05.565273  153025 main.go:141] libmachine: (auto-380966) Calling .GetSSHHostname
	I0903 23:32:05.568604  153025 main.go:141] libmachine: (auto-380966) DBG | domain auto-380966 has defined MAC address 52:54:00:18:46:db in network mk-auto-380966
	I0903 23:32:05.569041  153025 main.go:141] libmachine: (auto-380966) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:46:db", ip: ""} in network mk-auto-380966: {Iface:virbr3 ExpiryTime:2025-09-04 00:31:59 +0000 UTC Type:0 Mac:52:54:00:18:46:db Iaid: IPaddr:192.168.61.89 Prefix:24 Hostname:auto-380966 Clientid:01:52:54:00:18:46:db}
	I0903 23:32:05.569068  153025 main.go:141] libmachine: (auto-380966) DBG | domain auto-380966 has defined IP address 192.168.61.89 and MAC address 52:54:00:18:46:db in network mk-auto-380966
	I0903 23:32:05.569296  153025 main.go:141] libmachine: (auto-380966) Calling .GetSSHPort
	I0903 23:32:05.569556  153025 main.go:141] libmachine: (auto-380966) Calling .GetSSHKeyPath
	I0903 23:32:05.569734  153025 main.go:141] libmachine: (auto-380966) Calling .GetSSHKeyPath
	I0903 23:32:05.569886  153025 main.go:141] libmachine: (auto-380966) Calling .GetSSHUsername
	I0903 23:32:05.570055  153025 main.go:141] libmachine: Using SSH client type: native
	I0903 23:32:05.570278  153025 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83a420] 0x83d120 <nil>  [] 0s} 192.168.61.89 22 <nil> <nil>}
	I0903 23:32:05.570290  153025 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0903 23:32:05.686456  153025 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2025.02-dirty
	ID=buildroot
	VERSION_ID=2025.02
	PRETTY_NAME="Buildroot 2025.02"
	
	I0903 23:32:05.686521  153025 main.go:141] libmachine: found compatible host: buildroot
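The provisioner detection above is just `cat /etc/os-release` plus a match on the ID field. A sketch of that parse in Go, fed the exact output captured in the log:

package main

import (
	"bufio"
	"fmt"
	"strings"
)

// detectProvisioner picks a provisioner from the ID= field of
// /etc/os-release, the same check the "Detecting the provisioner"
// step performs over SSH.
func detectProvisioner(osRelease string) string {
	sc := bufio.NewScanner(strings.NewReader(osRelease))
	for sc.Scan() {
		if v, ok := strings.CutPrefix(sc.Text(), "ID="); ok {
			return strings.Trim(v, `"`)
		}
	}
	return "unknown"
}

func main() {
	const sample = "NAME=Buildroot\nVERSION=2025.02-dirty\nID=buildroot\nVERSION_ID=2025.02\nPRETTY_NAME=\"Buildroot 2025.02\"\n"
	fmt.Println("compatible host:", detectProvisioner(sample)) // buildroot
}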
	I0903 23:32:05.686531  153025 main.go:141] libmachine: Provisioning with buildroot...
	I0903 23:32:05.686549  153025 main.go:141] libmachine: (auto-380966) Calling .GetMachineName
	I0903 23:32:05.686832  153025 buildroot.go:166] provisioning hostname "auto-380966"
	I0903 23:32:05.686852  153025 main.go:141] libmachine: (auto-380966) Calling .GetMachineName
	I0903 23:32:05.687062  153025 main.go:141] libmachine: (auto-380966) Calling .GetSSHHostname
	I0903 23:32:05.689729  153025 main.go:141] libmachine: (auto-380966) DBG | domain auto-380966 has defined MAC address 52:54:00:18:46:db in network mk-auto-380966
	I0903 23:32:05.690069  153025 main.go:141] libmachine: (auto-380966) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:46:db", ip: ""} in network mk-auto-380966: {Iface:virbr3 ExpiryTime:2025-09-04 00:31:59 +0000 UTC Type:0 Mac:52:54:00:18:46:db Iaid: IPaddr:192.168.61.89 Prefix:24 Hostname:auto-380966 Clientid:01:52:54:00:18:46:db}
	I0903 23:32:05.690097  153025 main.go:141] libmachine: (auto-380966) DBG | domain auto-380966 has defined IP address 192.168.61.89 and MAC address 52:54:00:18:46:db in network mk-auto-380966
	I0903 23:32:05.690269  153025 main.go:141] libmachine: (auto-380966) Calling .GetSSHPort
	I0903 23:32:05.690456  153025 main.go:141] libmachine: (auto-380966) Calling .GetSSHKeyPath
	I0903 23:32:05.690607  153025 main.go:141] libmachine: (auto-380966) Calling .GetSSHKeyPath
	I0903 23:32:05.690731  153025 main.go:141] libmachine: (auto-380966) Calling .GetSSHUsername
	I0903 23:32:05.690908  153025 main.go:141] libmachine: Using SSH client type: native
	I0903 23:32:05.691123  153025 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83a420] 0x83d120 <nil>  [] 0s} 192.168.61.89 22 <nil> <nil>}
	I0903 23:32:05.691139  153025 main.go:141] libmachine: About to run SSH command:
	sudo hostname auto-380966 && echo "auto-380966" | sudo tee /etc/hostname
	I0903 23:32:05.827778  153025 main.go:141] libmachine: SSH cmd err, output: <nil>: auto-380966
	
	I0903 23:32:05.827811  153025 main.go:141] libmachine: (auto-380966) Calling .GetSSHHostname
	I0903 23:32:05.830794  153025 main.go:141] libmachine: (auto-380966) DBG | domain auto-380966 has defined MAC address 52:54:00:18:46:db in network mk-auto-380966
	I0903 23:32:05.831152  153025 main.go:141] libmachine: (auto-380966) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:46:db", ip: ""} in network mk-auto-380966: {Iface:virbr3 ExpiryTime:2025-09-04 00:31:59 +0000 UTC Type:0 Mac:52:54:00:18:46:db Iaid: IPaddr:192.168.61.89 Prefix:24 Hostname:auto-380966 Clientid:01:52:54:00:18:46:db}
	I0903 23:32:05.831179  153025 main.go:141] libmachine: (auto-380966) DBG | domain auto-380966 has defined IP address 192.168.61.89 and MAC address 52:54:00:18:46:db in network mk-auto-380966
	I0903 23:32:05.831375  153025 main.go:141] libmachine: (auto-380966) Calling .GetSSHPort
	I0903 23:32:05.831549  153025 main.go:141] libmachine: (auto-380966) Calling .GetSSHKeyPath
	I0903 23:32:05.831731  153025 main.go:141] libmachine: (auto-380966) Calling .GetSSHKeyPath
	I0903 23:32:05.831890  153025 main.go:141] libmachine: (auto-380966) Calling .GetSSHUsername
	I0903 23:32:05.832026  153025 main.go:141] libmachine: Using SSH client type: native
	I0903 23:32:05.832230  153025 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83a420] 0x83d120 <nil>  [] 0s} 192.168.61.89 22 <nil> <nil>}
	I0903 23:32:05.832246  153025 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sauto-380966' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 auto-380966/g' /etc/hosts;
				else 
					echo '127.0.1.1 auto-380966' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0903 23:32:05.955844  153025 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0903 23:32:05.955890  153025 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21341-109162/.minikube CaCertPath:/home/jenkins/minikube-integration/21341-109162/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21341-109162/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21341-109162/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21341-109162/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21341-109162/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21341-109162/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21341-109162/.minikube}
	I0903 23:32:05.955924  153025 buildroot.go:174] setting up certificates
	I0903 23:32:05.955939  153025 provision.go:84] configureAuth start
	I0903 23:32:05.955957  153025 main.go:141] libmachine: (auto-380966) Calling .GetMachineName
	I0903 23:32:05.956265  153025 main.go:141] libmachine: (auto-380966) Calling .GetIP
	I0903 23:32:05.959174  153025 main.go:141] libmachine: (auto-380966) DBG | domain auto-380966 has defined MAC address 52:54:00:18:46:db in network mk-auto-380966
	I0903 23:32:05.959577  153025 main.go:141] libmachine: (auto-380966) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:46:db", ip: ""} in network mk-auto-380966: {Iface:virbr3 ExpiryTime:2025-09-04 00:31:59 +0000 UTC Type:0 Mac:52:54:00:18:46:db Iaid: IPaddr:192.168.61.89 Prefix:24 Hostname:auto-380966 Clientid:01:52:54:00:18:46:db}
	I0903 23:32:05.959609  153025 main.go:141] libmachine: (auto-380966) DBG | domain auto-380966 has defined IP address 192.168.61.89 and MAC address 52:54:00:18:46:db in network mk-auto-380966
	I0903 23:32:05.959784  153025 main.go:141] libmachine: (auto-380966) Calling .GetSSHHostname
	I0903 23:32:05.962103  153025 main.go:141] libmachine: (auto-380966) DBG | domain auto-380966 has defined MAC address 52:54:00:18:46:db in network mk-auto-380966
	I0903 23:32:05.962521  153025 main.go:141] libmachine: (auto-380966) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:46:db", ip: ""} in network mk-auto-380966: {Iface:virbr3 ExpiryTime:2025-09-04 00:31:59 +0000 UTC Type:0 Mac:52:54:00:18:46:db Iaid: IPaddr:192.168.61.89 Prefix:24 Hostname:auto-380966 Clientid:01:52:54:00:18:46:db}
	I0903 23:32:05.962553  153025 main.go:141] libmachine: (auto-380966) DBG | domain auto-380966 has defined IP address 192.168.61.89 and MAC address 52:54:00:18:46:db in network mk-auto-380966
	I0903 23:32:05.962726  153025 provision.go:143] copyHostCerts
	I0903 23:32:05.962797  153025 exec_runner.go:144] found /home/jenkins/minikube-integration/21341-109162/.minikube/key.pem, removing ...
	I0903 23:32:05.962820  153025 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21341-109162/.minikube/key.pem
	I0903 23:32:05.962890  153025 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21341-109162/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21341-109162/.minikube/key.pem (1675 bytes)
	I0903 23:32:05.963009  153025 exec_runner.go:144] found /home/jenkins/minikube-integration/21341-109162/.minikube/ca.pem, removing ...
	I0903 23:32:05.963020  153025 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21341-109162/.minikube/ca.pem
	I0903 23:32:05.963052  153025 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21341-109162/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21341-109162/.minikube/ca.pem (1078 bytes)
	I0903 23:32:05.963134  153025 exec_runner.go:144] found /home/jenkins/minikube-integration/21341-109162/.minikube/cert.pem, removing ...
	I0903 23:32:05.963145  153025 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21341-109162/.minikube/cert.pem
	I0903 23:32:05.963174  153025 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21341-109162/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21341-109162/.minikube/cert.pem (1123 bytes)
	I0903 23:32:05.963245  153025 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21341-109162/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21341-109162/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21341-109162/.minikube/certs/ca-key.pem org=jenkins.auto-380966 san=[127.0.0.1 192.168.61.89 auto-380966 localhost minikube]
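The `generating server cert` step above mints a server certificate signed by the local CA, with the SAN list printed in the log. A condensed Go sketch of the same idea with crypto/x509 (error handling dropped for brevity; key sizes and lifetimes are illustrative, not minikube's values):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// Self-signed CA, standing in for the ca.pem/ca-key.pem pair.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server cert with the SANs from the log line:
	// [127.0.0.1 192.168.61.89 auto-380966 localhost minikube].
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.auto-380966"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.61.89")},
		DNSNames:     []string{"auto-380966", "localhost", "minikube"},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)

	pemBytes := pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
	fmt.Printf("server.pem (%d bytes)\n%s", len(pemBytes), pemBytes[:64])
}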
	I0903 23:32:06.341347  153025 provision.go:177] copyRemoteCerts
	I0903 23:32:06.341439  153025 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0903 23:32:06.341471  153025 main.go:141] libmachine: (auto-380966) Calling .GetSSHHostname
	I0903 23:32:06.344134  153025 main.go:141] libmachine: (auto-380966) DBG | domain auto-380966 has defined MAC address 52:54:00:18:46:db in network mk-auto-380966
	I0903 23:32:06.344472  153025 main.go:141] libmachine: (auto-380966) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:46:db", ip: ""} in network mk-auto-380966: {Iface:virbr3 ExpiryTime:2025-09-04 00:31:59 +0000 UTC Type:0 Mac:52:54:00:18:46:db Iaid: IPaddr:192.168.61.89 Prefix:24 Hostname:auto-380966 Clientid:01:52:54:00:18:46:db}
	I0903 23:32:06.344500  153025 main.go:141] libmachine: (auto-380966) DBG | domain auto-380966 has defined IP address 192.168.61.89 and MAC address 52:54:00:18:46:db in network mk-auto-380966
	I0903 23:32:06.344649  153025 main.go:141] libmachine: (auto-380966) Calling .GetSSHPort
	I0903 23:32:06.344890  153025 main.go:141] libmachine: (auto-380966) Calling .GetSSHKeyPath
	I0903 23:32:06.345043  153025 main.go:141] libmachine: (auto-380966) Calling .GetSSHUsername
	I0903 23:32:06.345157  153025 sshutil.go:53] new ssh client: &{IP:192.168.61.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21341-109162/.minikube/machines/auto-380966/id_rsa Username:docker}
	I0903 23:32:06.433210  153025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0903 23:32:06.468137  153025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I0903 23:32:06.499555  153025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0903 23:32:06.526576  153025 provision.go:87] duration metric: took 570.621221ms to configureAuth
	I0903 23:32:06.526607  153025 buildroot.go:189] setting minikube options for container-runtime
	I0903 23:32:06.526772  153025 config.go:182] Loaded profile config "auto-380966": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0903 23:32:06.526884  153025 main.go:141] libmachine: (auto-380966) Calling .GetSSHHostname
	I0903 23:32:06.529743  153025 main.go:141] libmachine: (auto-380966) DBG | domain auto-380966 has defined MAC address 52:54:00:18:46:db in network mk-auto-380966
	I0903 23:32:06.530062  153025 main.go:141] libmachine: (auto-380966) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:46:db", ip: ""} in network mk-auto-380966: {Iface:virbr3 ExpiryTime:2025-09-04 00:31:59 +0000 UTC Type:0 Mac:52:54:00:18:46:db Iaid: IPaddr:192.168.61.89 Prefix:24 Hostname:auto-380966 Clientid:01:52:54:00:18:46:db}
	I0903 23:32:06.530083  153025 main.go:141] libmachine: (auto-380966) DBG | domain auto-380966 has defined IP address 192.168.61.89 and MAC address 52:54:00:18:46:db in network mk-auto-380966
	I0903 23:32:06.530290  153025 main.go:141] libmachine: (auto-380966) Calling .GetSSHPort
	I0903 23:32:06.530510  153025 main.go:141] libmachine: (auto-380966) Calling .GetSSHKeyPath
	I0903 23:32:06.530758  153025 main.go:141] libmachine: (auto-380966) Calling .GetSSHKeyPath
	I0903 23:32:06.530933  153025 main.go:141] libmachine: (auto-380966) Calling .GetSSHUsername
	I0903 23:32:06.531118  153025 main.go:141] libmachine: Using SSH client type: native
	I0903 23:32:06.531331  153025 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83a420] 0x83d120 <nil>  [] 0s} 192.168.61.89 22 <nil> <nil>}
	I0903 23:32:06.531348  153025 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0903 23:32:06.777204  153025 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0903 23:32:06.777240  153025 main.go:141] libmachine: Checking connection to Docker...
	I0903 23:32:06.777251  153025 main.go:141] libmachine: (auto-380966) Calling .GetURL
	I0903 23:32:06.778453  153025 main.go:141] libmachine: (auto-380966) DBG | using libvirt version 6000000
	I0903 23:32:06.780474  153025 main.go:141] libmachine: (auto-380966) DBG | domain auto-380966 has defined MAC address 52:54:00:18:46:db in network mk-auto-380966
	I0903 23:32:06.780705  153025 main.go:141] libmachine: (auto-380966) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:46:db", ip: ""} in network mk-auto-380966: {Iface:virbr3 ExpiryTime:2025-09-04 00:31:59 +0000 UTC Type:0 Mac:52:54:00:18:46:db Iaid: IPaddr:192.168.61.89 Prefix:24 Hostname:auto-380966 Clientid:01:52:54:00:18:46:db}
	I0903 23:32:06.780730  153025 main.go:141] libmachine: (auto-380966) DBG | domain auto-380966 has defined IP address 192.168.61.89 and MAC address 52:54:00:18:46:db in network mk-auto-380966
	I0903 23:32:06.780875  153025 main.go:141] libmachine: Docker is up and running!
	I0903 23:32:06.780889  153025 main.go:141] libmachine: Reticulating splines...
	I0903 23:32:06.780898  153025 client.go:171] duration metric: took 21.97267922s to LocalClient.Create
	I0903 23:32:06.780923  153025 start.go:167] duration metric: took 21.972749518s to libmachine.API.Create "auto-380966"
	I0903 23:32:06.780936  153025 start.go:293] postStartSetup for "auto-380966" (driver="kvm2")
	I0903 23:32:06.780946  153025 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0903 23:32:06.780964  153025 main.go:141] libmachine: (auto-380966) Calling .DriverName
	I0903 23:32:06.781226  153025 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0903 23:32:06.781266  153025 main.go:141] libmachine: (auto-380966) Calling .GetSSHHostname
	I0903 23:32:06.783885  153025 main.go:141] libmachine: (auto-380966) DBG | domain auto-380966 has defined MAC address 52:54:00:18:46:db in network mk-auto-380966
	I0903 23:32:06.784265  153025 main.go:141] libmachine: (auto-380966) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:46:db", ip: ""} in network mk-auto-380966: {Iface:virbr3 ExpiryTime:2025-09-04 00:31:59 +0000 UTC Type:0 Mac:52:54:00:18:46:db Iaid: IPaddr:192.168.61.89 Prefix:24 Hostname:auto-380966 Clientid:01:52:54:00:18:46:db}
	I0903 23:32:06.784290  153025 main.go:141] libmachine: (auto-380966) DBG | domain auto-380966 has defined IP address 192.168.61.89 and MAC address 52:54:00:18:46:db in network mk-auto-380966
	I0903 23:32:06.784402  153025 main.go:141] libmachine: (auto-380966) Calling .GetSSHPort
	I0903 23:32:06.784620  153025 main.go:141] libmachine: (auto-380966) Calling .GetSSHKeyPath
	I0903 23:32:06.784811  153025 main.go:141] libmachine: (auto-380966) Calling .GetSSHUsername
	I0903 23:32:06.784998  153025 sshutil.go:53] new ssh client: &{IP:192.168.61.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21341-109162/.minikube/machines/auto-380966/id_rsa Username:docker}
	I0903 23:32:06.873323  153025 ssh_runner.go:195] Run: cat /etc/os-release
	I0903 23:32:06.878190  153025 info.go:137] Remote host: Buildroot 2025.02
	I0903 23:32:06.878221  153025 filesync.go:126] Scanning /home/jenkins/minikube-integration/21341-109162/.minikube/addons for local assets ...
	I0903 23:32:06.878308  153025 filesync.go:126] Scanning /home/jenkins/minikube-integration/21341-109162/.minikube/files for local assets ...
	I0903 23:32:06.878389  153025 filesync.go:149] local asset: /home/jenkins/minikube-integration/21341-109162/.minikube/files/etc/ssl/certs/1132882.pem -> 1132882.pem in /etc/ssl/certs
	I0903 23:32:06.878509  153025 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0903 23:32:06.890188  153025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/files/etc/ssl/certs/1132882.pem --> /etc/ssl/certs/1132882.pem (1708 bytes)
	I0903 23:32:06.917794  153025 start.go:296] duration metric: took 136.843593ms for postStartSetup
	I0903 23:32:06.917850  153025 main.go:141] libmachine: (auto-380966) Calling .GetConfigRaw
	I0903 23:32:06.918445  153025 main.go:141] libmachine: (auto-380966) Calling .GetIP
	I0903 23:32:06.920946  153025 main.go:141] libmachine: (auto-380966) DBG | domain auto-380966 has defined MAC address 52:54:00:18:46:db in network mk-auto-380966
	I0903 23:32:06.921338  153025 main.go:141] libmachine: (auto-380966) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:46:db", ip: ""} in network mk-auto-380966: {Iface:virbr3 ExpiryTime:2025-09-04 00:31:59 +0000 UTC Type:0 Mac:52:54:00:18:46:db Iaid: IPaddr:192.168.61.89 Prefix:24 Hostname:auto-380966 Clientid:01:52:54:00:18:46:db}
	I0903 23:32:06.921364  153025 main.go:141] libmachine: (auto-380966) DBG | domain auto-380966 has defined IP address 192.168.61.89 and MAC address 52:54:00:18:46:db in network mk-auto-380966
	I0903 23:32:06.921653  153025 profile.go:143] Saving config to /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/auto-380966/config.json ...
	I0903 23:32:06.921832  153025 start.go:128] duration metric: took 22.134823641s to createHost
	I0903 23:32:06.921854  153025 main.go:141] libmachine: (auto-380966) Calling .GetSSHHostname
	I0903 23:32:06.924371  153025 main.go:141] libmachine: (auto-380966) DBG | domain auto-380966 has defined MAC address 52:54:00:18:46:db in network mk-auto-380966
	I0903 23:32:06.924722  153025 main.go:141] libmachine: (auto-380966) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:46:db", ip: ""} in network mk-auto-380966: {Iface:virbr3 ExpiryTime:2025-09-04 00:31:59 +0000 UTC Type:0 Mac:52:54:00:18:46:db Iaid: IPaddr:192.168.61.89 Prefix:24 Hostname:auto-380966 Clientid:01:52:54:00:18:46:db}
	I0903 23:32:06.924748  153025 main.go:141] libmachine: (auto-380966) DBG | domain auto-380966 has defined IP address 192.168.61.89 and MAC address 52:54:00:18:46:db in network mk-auto-380966
	I0903 23:32:06.924884  153025 main.go:141] libmachine: (auto-380966) Calling .GetSSHPort
	I0903 23:32:06.925060  153025 main.go:141] libmachine: (auto-380966) Calling .GetSSHKeyPath
	I0903 23:32:06.925172  153025 main.go:141] libmachine: (auto-380966) Calling .GetSSHKeyPath
	I0903 23:32:06.925333  153025 main.go:141] libmachine: (auto-380966) Calling .GetSSHUsername
	I0903 23:32:06.925480  153025 main.go:141] libmachine: Using SSH client type: native
	I0903 23:32:06.925736  153025 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83a420] 0x83d120 <nil>  [] 0s} 192.168.61.89 22 <nil> <nil>}
	I0903 23:32:06.925747  153025 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0903 23:32:07.046590  153025 main.go:141] libmachine: SSH cmd err, output: <nil>: 1756942327.016763235
	
	I0903 23:32:07.046622  153025 fix.go:216] guest clock: 1756942327.016763235
	I0903 23:32:07.046635  153025 fix.go:229] Guest: 2025-09-03 23:32:07.016763235 +0000 UTC Remote: 2025-09-03 23:32:06.921843493 +0000 UTC m=+54.638532272 (delta=94.919742ms)
	I0903 23:32:07.046670  153025 fix.go:200] guest clock delta is within tolerance: 94.919742ms
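The guest-clock check above parses `date +%s.%N` from the guest and compares it with the host clock. A sketch of that delta-within-tolerance test in Go, using the exact timestamps from the log (the 2s tolerance is an assumption for illustration, not minikube's constant):

package main

import (
	"fmt"
	"time"
)

// checkClockDelta compares guest and host clocks and flags drift
// beyond the given tolerance.
func checkClockDelta(guest, host time.Time, tolerance time.Duration) error {
	delta := host.Sub(guest)
	if delta < 0 {
		delta = -delta
	}
	if delta > tolerance {
		return fmt.Errorf("guest clock delta %s exceeds tolerance %s", delta, tolerance)
	}
	return nil
}

func main() {
	guest := time.Unix(1756942327, 16763235) // parsed from 1756942327.016763235
	host := guest.Add(94919742 * time.Nanosecond)
	fmt.Println(checkClockDelta(guest, host, 2*time.Second)) // <nil>
}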
	I0903 23:32:07.046680  153025 start.go:83] releasing machines lock for "auto-380966", held for 22.259828957s
	I0903 23:32:07.046714  153025 main.go:141] libmachine: (auto-380966) Calling .DriverName
	I0903 23:32:07.047015  153025 main.go:141] libmachine: (auto-380966) Calling .GetIP
	I0903 23:32:07.050123  153025 main.go:141] libmachine: (auto-380966) DBG | domain auto-380966 has defined MAC address 52:54:00:18:46:db in network mk-auto-380966
	I0903 23:32:07.050471  153025 main.go:141] libmachine: (auto-380966) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:46:db", ip: ""} in network mk-auto-380966: {Iface:virbr3 ExpiryTime:2025-09-04 00:31:59 +0000 UTC Type:0 Mac:52:54:00:18:46:db Iaid: IPaddr:192.168.61.89 Prefix:24 Hostname:auto-380966 Clientid:01:52:54:00:18:46:db}
	I0903 23:32:07.050501  153025 main.go:141] libmachine: (auto-380966) DBG | domain auto-380966 has defined IP address 192.168.61.89 and MAC address 52:54:00:18:46:db in network mk-auto-380966
	I0903 23:32:07.050642  153025 main.go:141] libmachine: (auto-380966) Calling .DriverName
	I0903 23:32:07.051277  153025 main.go:141] libmachine: (auto-380966) Calling .DriverName
	I0903 23:32:07.051448  153025 main.go:141] libmachine: (auto-380966) Calling .DriverName
	I0903 23:32:07.051539  153025 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0903 23:32:07.051599  153025 main.go:141] libmachine: (auto-380966) Calling .GetSSHHostname
	I0903 23:32:07.051661  153025 ssh_runner.go:195] Run: cat /version.json
	I0903 23:32:07.051681  153025 main.go:141] libmachine: (auto-380966) Calling .GetSSHHostname
	I0903 23:32:07.054419  153025 main.go:141] libmachine: (auto-380966) DBG | domain auto-380966 has defined MAC address 52:54:00:18:46:db in network mk-auto-380966
	I0903 23:32:07.054735  153025 main.go:141] libmachine: (auto-380966) DBG | domain auto-380966 has defined MAC address 52:54:00:18:46:db in network mk-auto-380966
	I0903 23:32:07.054808  153025 main.go:141] libmachine: (auto-380966) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:46:db", ip: ""} in network mk-auto-380966: {Iface:virbr3 ExpiryTime:2025-09-04 00:31:59 +0000 UTC Type:0 Mac:52:54:00:18:46:db Iaid: IPaddr:192.168.61.89 Prefix:24 Hostname:auto-380966 Clientid:01:52:54:00:18:46:db}
	I0903 23:32:07.054831  153025 main.go:141] libmachine: (auto-380966) DBG | domain auto-380966 has defined IP address 192.168.61.89 and MAC address 52:54:00:18:46:db in network mk-auto-380966
	I0903 23:32:07.054978  153025 main.go:141] libmachine: (auto-380966) Calling .GetSSHPort
	I0903 23:32:07.055105  153025 main.go:141] libmachine: (auto-380966) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:46:db", ip: ""} in network mk-auto-380966: {Iface:virbr3 ExpiryTime:2025-09-04 00:31:59 +0000 UTC Type:0 Mac:52:54:00:18:46:db Iaid: IPaddr:192.168.61.89 Prefix:24 Hostname:auto-380966 Clientid:01:52:54:00:18:46:db}
	I0903 23:32:07.055132  153025 main.go:141] libmachine: (auto-380966) DBG | domain auto-380966 has defined IP address 192.168.61.89 and MAC address 52:54:00:18:46:db in network mk-auto-380966
	I0903 23:32:07.055185  153025 main.go:141] libmachine: (auto-380966) Calling .GetSSHKeyPath
	I0903 23:32:07.055420  153025 main.go:141] libmachine: (auto-380966) Calling .GetSSHUsername
	I0903 23:32:07.055423  153025 main.go:141] libmachine: (auto-380966) Calling .GetSSHPort
	I0903 23:32:07.055634  153025 main.go:141] libmachine: (auto-380966) Calling .GetSSHKeyPath
	I0903 23:32:07.055633  153025 sshutil.go:53] new ssh client: &{IP:192.168.61.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21341-109162/.minikube/machines/auto-380966/id_rsa Username:docker}
	I0903 23:32:07.055770  153025 main.go:141] libmachine: (auto-380966) Calling .GetSSHUsername
	I0903 23:32:07.055905  153025 sshutil.go:53] new ssh client: &{IP:192.168.61.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21341-109162/.minikube/machines/auto-380966/id_rsa Username:docker}
	I0903 23:32:07.182794  153025 ssh_runner.go:195] Run: systemctl --version
	I0903 23:32:07.188771  153025 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0903 23:32:07.349109  153025 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0903 23:32:07.355480  153025 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0903 23:32:07.355559  153025 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0903 23:32:07.377295  153025 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
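The find/mv step above sidelines bridge and podman CNI configs by renaming them with a .mk_disabled suffix, which is how the podman bridge conflist gets disabled here. A sketch of the name-selection logic in Go (pure path logic; the real step renames the files over SSH with sudo):

package main

import (
	"fmt"
	"path/filepath"
	"strings"
)

// disabledNames picks out bridge and podman CNI configs and computes
// their ".mk_disabled" rename targets, skipping already-disabled ones.
func disabledNames(entries []string) []string {
	var out []string
	for _, e := range entries {
		base := filepath.Base(e)
		if !strings.Contains(base, "bridge") && !strings.Contains(base, "podman") {
			continue
		}
		if strings.HasSuffix(base, ".mk_disabled") {
			continue
		}
		out = append(out, e+".mk_disabled")
	}
	return out
}

func main() {
	fmt.Println(disabledNames([]string{"/etc/cni/net.d/87-podman-bridge.conflist"}))
}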
	I0903 23:32:07.377319  153025 start.go:495] detecting cgroup driver to use...
	I0903 23:32:07.377417  153025 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0903 23:32:07.396792  153025 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0903 23:32:07.414218  153025 docker.go:218] disabling cri-docker service (if available) ...
	I0903 23:32:07.414294  153025 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0903 23:32:07.429878  153025 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0903 23:32:07.445329  153025 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0903 23:32:07.605958  153025 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0903 23:32:07.757855  153025 docker.go:234] disabling docker service ...
	I0903 23:32:07.757917  153025 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0903 23:32:07.773202  153025 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0903 23:32:07.788228  153025 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0903 23:32:07.996939  153025 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0903 23:32:08.148203  153025 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0903 23:32:08.164053  153025 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0903 23:32:08.184578  153025 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0903 23:32:08.184659  153025 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0903 23:32:08.196163  153025 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0903 23:32:08.196224  153025 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0903 23:32:08.206896  153025 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0903 23:32:08.217857  153025 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0903 23:32:08.228983  153025 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0903 23:32:08.241054  153025 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0903 23:32:08.251779  153025 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0903 23:32:08.269720  153025 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
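The run of sed invocations above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: pause image, cgroup manager, conmon cgroup, and an unprivileged-port sysctl. A Go sketch of one of those rewrites, the pause_image substitution, done on a string instead of the remote file:

package main

import (
	"fmt"
	"regexp"
)

// setPauseImage mirrors the sed rewrite above: replace any existing
// pause_image line with the desired image.
func setPauseImage(conf, image string) string {
	re := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	return re.ReplaceAllString(conf, `pause_image = "`+image+`"`)
}

func main() {
	conf := "[crio.image]\npause_image = \"registry.k8s.io/pause:3.9\"\n"
	fmt.Print(setPauseImage(conf, "registry.k8s.io/pause:3.10.1"))
}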
	I0903 23:32:08.280513  153025 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0903 23:32:08.290481  153025 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0903 23:32:08.290553  153025 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0903 23:32:08.308966  153025 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0903 23:32:08.320009  153025 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0903 23:32:08.459184  153025 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0903 23:32:08.575249  153025 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0903 23:32:08.575343  153025 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0903 23:32:08.580210  153025 start.go:563] Will wait 60s for crictl version
	I0903 23:32:08.580260  153025 ssh_runner.go:195] Run: which crictl
	I0903 23:32:08.583861  153025 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0903 23:32:08.623206  153025 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
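Both waits above poll with a deadline: first for the crio.sock path, then for a working crictl. A minimal Go sketch of such a poll-until-deadline check for the socket path (the interval and error text are illustrative; the real check stats the path over SSH):

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForPath polls for a filesystem path until it exists or the
// deadline passes.
func waitForPath(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
		}
		time.Sleep(500 * time.Millisecond)
	}
}

func main() {
	fmt.Println(waitForPath("/var/run/crio/crio.sock", 60*time.Second))
}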
	I0903 23:32:08.623300  153025 ssh_runner.go:195] Run: crio --version
	I0903 23:32:08.651091  153025 ssh_runner.go:195] Run: crio --version
	I0903 23:32:08.679254  153025 out.go:179] * Preparing Kubernetes v1.34.0 on CRI-O 1.29.1 ...
	I0903 23:32:07.073587  153376 out.go:252] * Updating the running kvm2 "kubernetes-upgrade-938492" VM ...
	I0903 23:32:07.073627  153376 machine.go:93] provisionDockerMachine start ...
	I0903 23:32:07.073643  153376 main.go:141] libmachine: (kubernetes-upgrade-938492) Calling .DriverName
	I0903 23:32:07.073845  153376 main.go:141] libmachine: (kubernetes-upgrade-938492) Calling .GetSSHHostname
	I0903 23:32:07.076563  153376 main.go:141] libmachine: (kubernetes-upgrade-938492) DBG | domain kubernetes-upgrade-938492 has defined MAC address 52:54:00:8d:7e:6a in network mk-kubernetes-upgrade-938492
	I0903 23:32:07.076987  153376 main.go:141] libmachine: (kubernetes-upgrade-938492) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:7e:6a", ip: ""} in network mk-kubernetes-upgrade-938492: {Iface:virbr2 ExpiryTime:2025-09-04 00:31:10 +0000 UTC Type:0 Mac:52:54:00:8d:7e:6a Iaid: IPaddr:192.168.50.53 Prefix:24 Hostname:kubernetes-upgrade-938492 Clientid:01:52:54:00:8d:7e:6a}
	I0903 23:32:07.077022  153376 main.go:141] libmachine: (kubernetes-upgrade-938492) DBG | domain kubernetes-upgrade-938492 has defined IP address 192.168.50.53 and MAC address 52:54:00:8d:7e:6a in network mk-kubernetes-upgrade-938492
	I0903 23:32:07.077168  153376 main.go:141] libmachine: (kubernetes-upgrade-938492) Calling .GetSSHPort
	I0903 23:32:07.077349  153376 main.go:141] libmachine: (kubernetes-upgrade-938492) Calling .GetSSHKeyPath
	I0903 23:32:07.077516  153376 main.go:141] libmachine: (kubernetes-upgrade-938492) Calling .GetSSHKeyPath
	I0903 23:32:07.077671  153376 main.go:141] libmachine: (kubernetes-upgrade-938492) Calling .GetSSHUsername
	I0903 23:32:07.077928  153376 main.go:141] libmachine: Using SSH client type: native
	I0903 23:32:07.078158  153376 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83a420] 0x83d120 <nil>  [] 0s} 192.168.50.53 22 <nil> <nil>}
	I0903 23:32:07.078171  153376 main.go:141] libmachine: About to run SSH command:
	hostname
	I0903 23:32:07.194664  153376 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-938492
	
	I0903 23:32:07.194696  153376 main.go:141] libmachine: (kubernetes-upgrade-938492) Calling .GetMachineName
	I0903 23:32:07.194949  153376 buildroot.go:166] provisioning hostname "kubernetes-upgrade-938492"
	I0903 23:32:07.194978  153376 main.go:141] libmachine: (kubernetes-upgrade-938492) Calling .GetMachineName
	I0903 23:32:07.195254  153376 main.go:141] libmachine: (kubernetes-upgrade-938492) Calling .GetSSHHostname
	I0903 23:32:07.198760  153376 main.go:141] libmachine: (kubernetes-upgrade-938492) DBG | domain kubernetes-upgrade-938492 has defined MAC address 52:54:00:8d:7e:6a in network mk-kubernetes-upgrade-938492
	I0903 23:32:07.199248  153376 main.go:141] libmachine: (kubernetes-upgrade-938492) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:7e:6a", ip: ""} in network mk-kubernetes-upgrade-938492: {Iface:virbr2 ExpiryTime:2025-09-04 00:31:10 +0000 UTC Type:0 Mac:52:54:00:8d:7e:6a Iaid: IPaddr:192.168.50.53 Prefix:24 Hostname:kubernetes-upgrade-938492 Clientid:01:52:54:00:8d:7e:6a}
	I0903 23:32:07.199282  153376 main.go:141] libmachine: (kubernetes-upgrade-938492) DBG | domain kubernetes-upgrade-938492 has defined IP address 192.168.50.53 and MAC address 52:54:00:8d:7e:6a in network mk-kubernetes-upgrade-938492
	I0903 23:32:07.199557  153376 main.go:141] libmachine: (kubernetes-upgrade-938492) Calling .GetSSHPort
	I0903 23:32:07.199779  153376 main.go:141] libmachine: (kubernetes-upgrade-938492) Calling .GetSSHKeyPath
	I0903 23:32:07.199955  153376 main.go:141] libmachine: (kubernetes-upgrade-938492) Calling .GetSSHKeyPath
	I0903 23:32:07.200134  153376 main.go:141] libmachine: (kubernetes-upgrade-938492) Calling .GetSSHUsername
	I0903 23:32:07.200374  153376 main.go:141] libmachine: Using SSH client type: native
	I0903 23:32:07.200579  153376 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83a420] 0x83d120 <nil>  [] 0s} 192.168.50.53 22 <nil> <nil>}
	I0903 23:32:07.200592  153376 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-938492 && echo "kubernetes-upgrade-938492" | sudo tee /etc/hostname
	I0903 23:32:07.335318  153376 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-938492
	
	I0903 23:32:07.335348  153376 main.go:141] libmachine: (kubernetes-upgrade-938492) Calling .GetSSHHostname
	I0903 23:32:07.337776  153376 main.go:141] libmachine: (kubernetes-upgrade-938492) DBG | domain kubernetes-upgrade-938492 has defined MAC address 52:54:00:8d:7e:6a in network mk-kubernetes-upgrade-938492
	I0903 23:32:07.338212  153376 main.go:141] libmachine: (kubernetes-upgrade-938492) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:7e:6a", ip: ""} in network mk-kubernetes-upgrade-938492: {Iface:virbr2 ExpiryTime:2025-09-04 00:31:10 +0000 UTC Type:0 Mac:52:54:00:8d:7e:6a Iaid: IPaddr:192.168.50.53 Prefix:24 Hostname:kubernetes-upgrade-938492 Clientid:01:52:54:00:8d:7e:6a}
	I0903 23:32:07.338243  153376 main.go:141] libmachine: (kubernetes-upgrade-938492) DBG | domain kubernetes-upgrade-938492 has defined IP address 192.168.50.53 and MAC address 52:54:00:8d:7e:6a in network mk-kubernetes-upgrade-938492
	I0903 23:32:07.338543  153376 main.go:141] libmachine: (kubernetes-upgrade-938492) Calling .GetSSHPort
	I0903 23:32:07.338761  153376 main.go:141] libmachine: (kubernetes-upgrade-938492) Calling .GetSSHKeyPath
	I0903 23:32:07.338965  153376 main.go:141] libmachine: (kubernetes-upgrade-938492) Calling .GetSSHKeyPath
	I0903 23:32:07.339114  153376 main.go:141] libmachine: (kubernetes-upgrade-938492) Calling .GetSSHUsername
	I0903 23:32:07.339322  153376 main.go:141] libmachine: Using SSH client type: native
	I0903 23:32:07.339523  153376 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83a420] 0x83d120 <nil>  [] 0s} 192.168.50.53 22 <nil> <nil>}
	I0903 23:32:07.339540  153376 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-938492' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-938492/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-938492' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0903 23:32:07.462594  153376 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0903 23:32:07.462625  153376 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21341-109162/.minikube CaCertPath:/home/jenkins/minikube-integration/21341-109162/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21341-109162/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21341-109162/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21341-109162/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21341-109162/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21341-109162/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21341-109162/.minikube}
	I0903 23:32:07.462647  153376 buildroot.go:174] setting up certificates
	I0903 23:32:07.462654  153376 provision.go:84] configureAuth start
	I0903 23:32:07.462663  153376 main.go:141] libmachine: (kubernetes-upgrade-938492) Calling .GetMachineName
	I0903 23:32:07.462958  153376 main.go:141] libmachine: (kubernetes-upgrade-938492) Calling .GetIP
	I0903 23:32:07.465872  153376 main.go:141] libmachine: (kubernetes-upgrade-938492) DBG | domain kubernetes-upgrade-938492 has defined MAC address 52:54:00:8d:7e:6a in network mk-kubernetes-upgrade-938492
	I0903 23:32:07.466378  153376 main.go:141] libmachine: (kubernetes-upgrade-938492) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:7e:6a", ip: ""} in network mk-kubernetes-upgrade-938492: {Iface:virbr2 ExpiryTime:2025-09-04 00:31:10 +0000 UTC Type:0 Mac:52:54:00:8d:7e:6a Iaid: IPaddr:192.168.50.53 Prefix:24 Hostname:kubernetes-upgrade-938492 Clientid:01:52:54:00:8d:7e:6a}
	I0903 23:32:07.466415  153376 main.go:141] libmachine: (kubernetes-upgrade-938492) DBG | domain kubernetes-upgrade-938492 has defined IP address 192.168.50.53 and MAC address 52:54:00:8d:7e:6a in network mk-kubernetes-upgrade-938492
	I0903 23:32:07.466523  153376 main.go:141] libmachine: (kubernetes-upgrade-938492) Calling .GetSSHHostname
	I0903 23:32:07.469170  153376 main.go:141] libmachine: (kubernetes-upgrade-938492) DBG | domain kubernetes-upgrade-938492 has defined MAC address 52:54:00:8d:7e:6a in network mk-kubernetes-upgrade-938492
	I0903 23:32:07.469628  153376 main.go:141] libmachine: (kubernetes-upgrade-938492) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:7e:6a", ip: ""} in network mk-kubernetes-upgrade-938492: {Iface:virbr2 ExpiryTime:2025-09-04 00:31:10 +0000 UTC Type:0 Mac:52:54:00:8d:7e:6a Iaid: IPaddr:192.168.50.53 Prefix:24 Hostname:kubernetes-upgrade-938492 Clientid:01:52:54:00:8d:7e:6a}
	I0903 23:32:07.469662  153376 main.go:141] libmachine: (kubernetes-upgrade-938492) DBG | domain kubernetes-upgrade-938492 has defined IP address 192.168.50.53 and MAC address 52:54:00:8d:7e:6a in network mk-kubernetes-upgrade-938492
	I0903 23:32:07.469880  153376 provision.go:143] copyHostCerts
	I0903 23:32:07.469956  153376 exec_runner.go:144] found /home/jenkins/minikube-integration/21341-109162/.minikube/key.pem, removing ...
	I0903 23:32:07.469980  153376 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21341-109162/.minikube/key.pem
	I0903 23:32:07.470051  153376 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21341-109162/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21341-109162/.minikube/key.pem (1675 bytes)
	I0903 23:32:07.470177  153376 exec_runner.go:144] found /home/jenkins/minikube-integration/21341-109162/.minikube/ca.pem, removing ...
	I0903 23:32:07.470193  153376 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21341-109162/.minikube/ca.pem
	I0903 23:32:07.470227  153376 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21341-109162/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21341-109162/.minikube/ca.pem (1078 bytes)
	I0903 23:32:07.470335  153376 exec_runner.go:144] found /home/jenkins/minikube-integration/21341-109162/.minikube/cert.pem, removing ...
	I0903 23:32:07.470349  153376 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21341-109162/.minikube/cert.pem
	I0903 23:32:07.470382  153376 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21341-109162/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21341-109162/.minikube/cert.pem (1123 bytes)
	I0903 23:32:07.470461  153376 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21341-109162/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21341-109162/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21341-109162/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-938492 san=[127.0.0.1 192.168.50.53 kubernetes-upgrade-938492 localhost minikube]
	I0903 23:32:07.635236  153376 provision.go:177] copyRemoteCerts
	I0903 23:32:07.635347  153376 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0903 23:32:07.635380  153376 main.go:141] libmachine: (kubernetes-upgrade-938492) Calling .GetSSHHostname
	I0903 23:32:07.638657  153376 main.go:141] libmachine: (kubernetes-upgrade-938492) DBG | domain kubernetes-upgrade-938492 has defined MAC address 52:54:00:8d:7e:6a in network mk-kubernetes-upgrade-938492
	I0903 23:32:07.639058  153376 main.go:141] libmachine: (kubernetes-upgrade-938492) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:7e:6a", ip: ""} in network mk-kubernetes-upgrade-938492: {Iface:virbr2 ExpiryTime:2025-09-04 00:31:10 +0000 UTC Type:0 Mac:52:54:00:8d:7e:6a Iaid: IPaddr:192.168.50.53 Prefix:24 Hostname:kubernetes-upgrade-938492 Clientid:01:52:54:00:8d:7e:6a}
	I0903 23:32:07.639085  153376 main.go:141] libmachine: (kubernetes-upgrade-938492) DBG | domain kubernetes-upgrade-938492 has defined IP address 192.168.50.53 and MAC address 52:54:00:8d:7e:6a in network mk-kubernetes-upgrade-938492
	I0903 23:32:07.639337  153376 main.go:141] libmachine: (kubernetes-upgrade-938492) Calling .GetSSHPort
	I0903 23:32:07.639569  153376 main.go:141] libmachine: (kubernetes-upgrade-938492) Calling .GetSSHKeyPath
	I0903 23:32:07.639757  153376 main.go:141] libmachine: (kubernetes-upgrade-938492) Calling .GetSSHUsername
	I0903 23:32:07.639923  153376 sshutil.go:53] new ssh client: &{IP:192.168.50.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21341-109162/.minikube/machines/kubernetes-upgrade-938492/id_rsa Username:docker}
	I0903 23:32:07.734200  153376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0903 23:32:07.765499  153376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0903 23:32:07.797727  153376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0903 23:32:07.828714  153376 provision.go:87] duration metric: took 366.043601ms to configureAuth
	I0903 23:32:07.828752  153376 buildroot.go:189] setting minikube options for container-runtime
	I0903 23:32:07.828973  153376 config.go:182] Loaded profile config "kubernetes-upgrade-938492": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0903 23:32:07.829083  153376 main.go:141] libmachine: (kubernetes-upgrade-938492) Calling .GetSSHHostname
	I0903 23:32:07.832054  153376 main.go:141] libmachine: (kubernetes-upgrade-938492) DBG | domain kubernetes-upgrade-938492 has defined MAC address 52:54:00:8d:7e:6a in network mk-kubernetes-upgrade-938492
	I0903 23:32:07.832505  153376 main.go:141] libmachine: (kubernetes-upgrade-938492) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:7e:6a", ip: ""} in network mk-kubernetes-upgrade-938492: {Iface:virbr2 ExpiryTime:2025-09-04 00:31:10 +0000 UTC Type:0 Mac:52:54:00:8d:7e:6a Iaid: IPaddr:192.168.50.53 Prefix:24 Hostname:kubernetes-upgrade-938492 Clientid:01:52:54:00:8d:7e:6a}
	I0903 23:32:07.832543  153376 main.go:141] libmachine: (kubernetes-upgrade-938492) DBG | domain kubernetes-upgrade-938492 has defined IP address 192.168.50.53 and MAC address 52:54:00:8d:7e:6a in network mk-kubernetes-upgrade-938492
	I0903 23:32:07.832667  153376 main.go:141] libmachine: (kubernetes-upgrade-938492) Calling .GetSSHPort
	I0903 23:32:07.832888  153376 main.go:141] libmachine: (kubernetes-upgrade-938492) Calling .GetSSHKeyPath
	I0903 23:32:07.833080  153376 main.go:141] libmachine: (kubernetes-upgrade-938492) Calling .GetSSHKeyPath
	I0903 23:32:07.833276  153376 main.go:141] libmachine: (kubernetes-upgrade-938492) Calling .GetSSHUsername
	I0903 23:32:07.833480  153376 main.go:141] libmachine: Using SSH client type: native
	I0903 23:32:07.833748  153376 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83a420] 0x83d120 <nil>  [] 0s} 192.168.50.53 22 <nil> <nil>}
	I0903 23:32:07.833772  153376 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0903 23:32:08.680394  153025 main.go:141] libmachine: (auto-380966) Calling .GetIP
	I0903 23:32:08.682937  153025 main.go:141] libmachine: (auto-380966) DBG | domain auto-380966 has defined MAC address 52:54:00:18:46:db in network mk-auto-380966
	I0903 23:32:08.683256  153025 main.go:141] libmachine: (auto-380966) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:46:db", ip: ""} in network mk-auto-380966: {Iface:virbr3 ExpiryTime:2025-09-04 00:31:59 +0000 UTC Type:0 Mac:52:54:00:18:46:db Iaid: IPaddr:192.168.61.89 Prefix:24 Hostname:auto-380966 Clientid:01:52:54:00:18:46:db}
	I0903 23:32:08.683284  153025 main.go:141] libmachine: (auto-380966) DBG | domain auto-380966 has defined IP address 192.168.61.89 and MAC address 52:54:00:18:46:db in network mk-auto-380966
	I0903 23:32:08.683526  153025 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0903 23:32:08.687765  153025 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
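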
	I0903 23:32:08.702395  153025 kubeadm.go:875] updating cluster {Name:auto-380966 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21147/minikube-v1.36.0-1753487480-21147-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:auto-380966 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.89 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0903 23:32:08.702497  153025 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0903 23:32:08.702558  153025 ssh_runner.go:195] Run: sudo crictl images --output json
	I0903 23:32:08.735956  153025 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.0". assuming images are not preloaded.
	I0903 23:32:08.736031  153025 ssh_runner.go:195] Run: which lz4
	I0903 23:32:08.739728  153025 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0903 23:32:08.744077  153025 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0903 23:32:08.744115  153025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (409455026 bytes)
	I0903 23:32:10.113962  153025 crio.go:462] duration metric: took 1.374279998s to copy over tarball
	I0903 23:32:10.114079  153025 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0903 23:32:11.691516  153025 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.577397432s)
	I0903 23:32:11.691544  153025 crio.go:469] duration metric: took 1.577549375s to extract the tarball
	I0903 23:32:11.691552  153025 ssh_runner.go:146] rm: /preloaded.tar.lz4
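
(The steps above are the preload fast path: stat shows no tarball on the guest, the cached lz4 tarball is scp'd over, and tar unpacks it into /var with xattrs preserved so file capabilities survive. A rough Go sketch of the extraction step under those assumptions; extractPreload is illustrative and needs root to run for real.)

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// extractPreload mirrors the command in the log: unpack the lz4-compressed
// image tarball into /var, preserving xattrs such as security.capability.
func extractPreload(tarball string) error {
	start := time.Now()
	cmd := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", tarball)
	if out, err := cmd.CombinedOutput(); err != nil {
		return fmt.Errorf("tar: %v: %s", err, out)
	}
	fmt.Printf("duration metric: took %s to extract the tarball\n", time.Since(start))
	return nil
}

func main() {
	if err := extractPreload("/preloaded.tar.lz4"); err != nil {
		fmt.Println(err)
	}
}
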
	I0903 23:32:11.734130  153025 ssh_runner.go:195] Run: sudo crictl images --output json
	I0903 23:32:11.780029  153025 crio.go:514] all images are preloaded for cri-o runtime.
	I0903 23:32:11.780055  153025 cache_images.go:85] Images are preloaded, skipping loading
	I0903 23:32:11.780064  153025 kubeadm.go:926] updating node { 192.168.61.89 8443 v1.34.0 crio true true} ...
	I0903 23:32:11.780157  153025 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=auto-380966 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.89
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:auto-380966 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
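
(The kubelet unit text above is a systemd drop-in: the empty ExecStart= line clears the base unit's command before the override sets the new one, which is why ExecStart appears twice. A small Go sketch that renders such a drop-in from the values in the log; kubeletDropIn is an illustrative helper.)

package main

import "fmt"

// kubeletDropIn renders a systemd drop-in like the one logged above. The
// empty "ExecStart=" is required: drop-ins append by default, so the base
// unit's command must be cleared before the override takes effect.
func kubeletDropIn(version, node, ip string) string {
	return fmt.Sprintf(`[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/%s/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=%s --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=%s

[Install]
`, version, node, ip)
}

func main() {
	fmt.Print(kubeletDropIn("v1.34.0", "auto-380966", "192.168.61.89"))
}
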
	I0903 23:32:11.780221  153025 ssh_runner.go:195] Run: crio config
	I0903 23:32:11.826014  153025 cni.go:84] Creating CNI manager for ""
	I0903 23:32:11.826042  153025 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0903 23:32:11.826056  153025 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0903 23:32:11.826083  153025 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.89 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:auto-380966 NodeName:auto-380966 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.89"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.89 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0903 23:32:11.826245  153025 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.89
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "auto-380966"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.61.89"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.89"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
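
(The generated kubeadm.yaml above is one file holding four YAML documents separated by "---": InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration. A minimal Go sketch of splitting such a stream and listing each document's kind; splitDocs is illustrative.)

package main

import (
	"fmt"
	"strings"
)

// splitDocs splits a multi-document YAML stream on the standard "---"
// separator and returns the kind: value found in each document.
func splitDocs(manifest string) []string {
	var kinds []string
	for _, doc := range strings.Split(manifest, "\n---\n") {
		for _, line := range strings.Split(doc, "\n") {
			if strings.HasPrefix(line, "kind: ") {
				kinds = append(kinds, strings.TrimPrefix(line, "kind: "))
			}
		}
	}
	return kinds
}

func main() {
	manifest := "apiVersion: kubeadm.k8s.io/v1beta4\nkind: InitConfiguration\n---\napiVersion: kubeadm.k8s.io/v1beta4\nkind: ClusterConfiguration\n"
	fmt.Println(splitDocs(manifest)) // [InitConfiguration ClusterConfiguration]
}
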
	I0903 23:32:11.826337  153025 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0903 23:32:11.837826  153025 binaries.go:44] Found k8s binaries, skipping transfer
	I0903 23:32:11.837904  153025 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0903 23:32:11.848249  153025 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (310 bytes)
	I0903 23:32:11.866768  153025 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0903 23:32:11.884906  153025 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2211 bytes)
	I0903 23:32:11.902941  153025 ssh_runner.go:195] Run: grep 192.168.61.89	control-plane.minikube.internal$ /etc/hosts
	I0903 23:32:11.906789  153025 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.89	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0903 23:32:11.919894  153025 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0903 23:32:12.052280  153025 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0903 23:32:12.070252  153025 certs.go:68] Setting up /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/auto-380966 for IP: 192.168.61.89
	I0903 23:32:12.070278  153025 certs.go:194] generating shared ca certs ...
	I0903 23:32:12.070295  153025 certs.go:226] acquiring lock for ca certs: {Name:mk06ceac58fd506484973632a7bd0b701183c2c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0903 23:32:12.070503  153025 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21341-109162/.minikube/ca.key
	I0903 23:32:12.070564  153025 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21341-109162/.minikube/proxy-client-ca.key
	I0903 23:32:12.070578  153025 certs.go:256] generating profile certs ...
	I0903 23:32:12.070652  153025 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/auto-380966/client.key
	I0903 23:32:12.070683  153025 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/auto-380966/client.crt with IP's: []
	I0903 23:32:12.395925  153025 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/auto-380966/client.crt ...
	I0903 23:32:12.395957  153025 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/auto-380966/client.crt: {Name:mkd1610570e3566fbca4235915e2db051e1575b0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0903 23:32:12.396160  153025 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/auto-380966/client.key ...
	I0903 23:32:12.396177  153025 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/auto-380966/client.key: {Name:mk66c43dc2397789fbda8d471e8fc813c96e5094 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0903 23:32:12.396294  153025 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/auto-380966/apiserver.key.b4f06fd0
	I0903 23:32:12.396319  153025 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/auto-380966/apiserver.crt.b4f06fd0 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.61.89]
	I0903 23:32:12.554462  153025 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/auto-380966/apiserver.crt.b4f06fd0 ...
	I0903 23:32:12.554501  153025 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/auto-380966/apiserver.crt.b4f06fd0: {Name:mk7fbcfc84cf661a469ead2666c2cbc310f9091c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0903 23:32:12.554737  153025 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/auto-380966/apiserver.key.b4f06fd0 ...
	I0903 23:32:12.554764  153025 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/auto-380966/apiserver.key.b4f06fd0: {Name:mkba5e54138c2296f5828e9a9f978b402d52c52d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0903 23:32:12.554888  153025 certs.go:381] copying /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/auto-380966/apiserver.crt.b4f06fd0 -> /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/auto-380966/apiserver.crt
	I0903 23:32:12.554979  153025 certs.go:385] copying /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/auto-380966/apiserver.key.b4f06fd0 -> /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/auto-380966/apiserver.key
	I0903 23:32:12.555047  153025 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/auto-380966/proxy-client.key
	I0903 23:32:12.555062  153025 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/auto-380966/proxy-client.crt with IP's: []
	I0903 23:32:12.889297  153025 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/auto-380966/proxy-client.crt ...
	I0903 23:32:12.889330  153025 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/auto-380966/proxy-client.crt: {Name:mk870a5377e06288e5c0b5ef13be25818b43cab5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0903 23:32:12.889523  153025 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/auto-380966/proxy-client.key ...
	I0903 23:32:12.889539  153025 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/auto-380966/proxy-client.key: {Name:mkbafe4c6afc2eede0871ce0bbb32a111f48ba68 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
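
(The profile certs above follow one pattern: generate a key, sign a cert whose IP SANs cover every address the endpoint must answer on (service VIP 10.96.0.1, loopback, node IP), and write both halves under file locks. A hedged stdlib sketch of issuing a cert with IP SANs; selfSignedCert is illustrative, not minikube's crypto.go.)

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"time"
)

// selfSignedCert issues a PEM-encoded cert whose SANs are the given IPs,
// roughly the shape of the apiserver profile cert generated in the log.
func selfSignedCert(ips []net.IP) ([]byte, error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the config
		KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  ips,
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		return nil, err
	}
	return pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), nil
}

func main() {
	ips := []net.IP{net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("192.168.61.89")}
	crt, err := selfSignedCert(ips)
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s", crt)
}
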
	I0903 23:32:12.889714  153025 certs.go:484] found cert: /home/jenkins/minikube-integration/21341-109162/.minikube/certs/113288.pem (1338 bytes)
	W0903 23:32:12.889748  153025 certs.go:480] ignoring /home/jenkins/minikube-integration/21341-109162/.minikube/certs/113288_empty.pem, impossibly tiny 0 bytes
	I0903 23:32:12.889758  153025 certs.go:484] found cert: /home/jenkins/minikube-integration/21341-109162/.minikube/certs/ca-key.pem (1675 bytes)
	I0903 23:32:12.889781  153025 certs.go:484] found cert: /home/jenkins/minikube-integration/21341-109162/.minikube/certs/ca.pem (1078 bytes)
	I0903 23:32:12.889803  153025 certs.go:484] found cert: /home/jenkins/minikube-integration/21341-109162/.minikube/certs/cert.pem (1123 bytes)
	I0903 23:32:12.889825  153025 certs.go:484] found cert: /home/jenkins/minikube-integration/21341-109162/.minikube/certs/key.pem (1675 bytes)
	I0903 23:32:12.889860  153025 certs.go:484] found cert: /home/jenkins/minikube-integration/21341-109162/.minikube/files/etc/ssl/certs/1132882.pem (1708 bytes)
	I0903 23:32:12.890441  153025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0903 23:32:12.919288  153025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0903 23:32:12.951490  153025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0903 23:32:12.978555  153025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0903 23:32:13.007893  153025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/auto-380966/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I0903 23:32:13.037103  153025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/auto-380966/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0903 23:32:13.065739  153025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/auto-380966/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0903 23:32:13.093760  153025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/auto-380966/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0903 23:32:13.123633  153025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/certs/113288.pem --> /usr/share/ca-certificates/113288.pem (1338 bytes)
	I0903 23:32:13.152093  153025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/files/etc/ssl/certs/1132882.pem --> /usr/share/ca-certificates/1132882.pem (1708 bytes)
	I0903 23:32:13.179486  153025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0903 23:32:13.211136  153025 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0903 23:32:13.233497  153025 ssh_runner.go:195] Run: openssl version
	I0903 23:32:13.240113  153025 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0903 23:32:13.254874  153025 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0903 23:32:13.259857  153025 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep  3 22:28 /usr/share/ca-certificates/minikubeCA.pem
	I0903 23:32:13.259926  153025 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0903 23:32:13.269240  153025 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0903 23:32:13.285538  153025 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/113288.pem && ln -fs /usr/share/ca-certificates/113288.pem /etc/ssl/certs/113288.pem"
	I0903 23:32:13.300474  153025 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/113288.pem
	I0903 23:32:13.306299  153025 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep  3 22:36 /usr/share/ca-certificates/113288.pem
	I0903 23:32:13.306373  153025 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/113288.pem
	I0903 23:32:13.313350  153025 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/113288.pem /etc/ssl/certs/51391683.0"
	I0903 23:32:13.325558  153025 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1132882.pem && ln -fs /usr/share/ca-certificates/1132882.pem /etc/ssl/certs/1132882.pem"
	I0903 23:32:13.337935  153025 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1132882.pem
	I0903 23:32:13.342862  153025 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep  3 22:36 /usr/share/ca-certificates/1132882.pem
	I0903 23:32:13.342922  153025 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1132882.pem
	I0903 23:32:13.350119  153025 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1132882.pem /etc/ssl/certs/3ec20f2e.0"
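
(Each CA install above ends with a symlink named after the cert's subject hash: openssl x509 -hash -noout prints the hash, and OpenSSL resolves trust lookups in /etc/ssl/certs via <hash>.0 links. A small Go sketch of that pattern; hashLink is illustrative, and writing into /etc/ssl/certs needs root, so the example targets a temp dir.)

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// hashLink reproduces the install pattern from the log: ask openssl for the
// subject hash of a PEM cert, then symlink <hash>.0 in the trust dir to it.
func hashLink(certPath, trustDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(trustDir, hash+".0")
	os.Remove(link) // emulate ln -fs (force)
	return os.Symlink(certPath, link)
}

func main() {
	if err := hashLink("/usr/share/ca-certificates/minikubeCA.pem", os.TempDir()); err != nil {
		fmt.Println(err)
	}
}
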
	I0903 23:32:13.362568  153025 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0903 23:32:13.367719  153025 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0903 23:32:13.367782  153025 kubeadm.go:392] StartCluster: {Name:auto-380966 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21147/minikube-v1.36.0-1753487480-21147-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:auto-380966 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.89 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0903 23:32:13.367872  153025 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0903 23:32:13.367923  153025 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0903 23:32:13.408594  153025 cri.go:89] found id: ""
	I0903 23:32:13.408679  153025 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0903 23:32:13.420099  153025 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0903 23:32:13.438089  153025 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0903 23:32:13.454624  153025 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0903 23:32:13.454649  153025 kubeadm.go:157] found existing configuration files:
	
	I0903 23:32:13.454706  153025 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0903 23:32:13.465618  153025 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0903 23:32:13.465696  153025 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0903 23:32:13.476933  153025 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0903 23:32:13.487128  153025 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0903 23:32:13.487194  153025 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0903 23:32:13.498159  153025 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0903 23:32:13.509253  153025 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0903 23:32:13.509328  153025 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0903 23:32:13.522348  153025 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0903 23:32:13.535583  153025 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0903 23:32:13.535661  153025 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
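
(The four grep/rm pairs above implement the stale-config rule: a kubeconfig under /etc/kubernetes that does not mention the expected control-plane endpoint is stale or absent and is removed before kubeadm init. A compact Go sketch of the same rule; cleanStaleConfigs is illustrative.)

package main

import (
	"fmt"
	"os"
	"strings"
)

// cleanStaleConfigs removes any kubeconfig that does not reference the
// expected endpoint, mirroring the grep-then-rm sequence in the log.
func cleanStaleConfigs(endpoint string, paths []string) {
	for _, p := range paths {
		data, err := os.ReadFile(p)
		if err == nil && strings.Contains(string(data), endpoint) {
			continue // config already points at the right control plane
		}
		fmt.Printf("%q may not be in %s - will remove\n", endpoint, p)
		os.Remove(p) // the real flow does this via sudo rm -f over SSH
	}
}

func main() {
	cleanStaleConfigs("https://control-plane.minikube.internal:8443", []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	})
}
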
	I0903 23:32:13.551050  153025 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0903 23:32:13.630668  153025 kubeadm.go:310] [init] Using Kubernetes version: v1.34.0
	I0903 23:32:13.630768  153025 kubeadm.go:310] [preflight] Running pre-flight checks
	I0903 23:32:13.759175  153025 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0903 23:32:13.759353  153025 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0903 23:32:13.759495  153025 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0903 23:32:13.771272  153025 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0903 23:32:14.203904  153807 start.go:364] duration metric: took 23.11798916s to acquireMachinesLock for "NoKubernetes-561956"
	I0903 23:32:14.203964  153807 start.go:96] Skipping create...Using existing machine configuration
	I0903 23:32:14.203984  153807 fix.go:54] fixHost starting: 
	I0903 23:32:14.204446  153807 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21341-109162/.minikube/bin/docker-machine-driver-kvm2
	I0903 23:32:14.204480  153807 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0903 23:32:14.223252  153807 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38567
	I0903 23:32:14.223853  153807 main.go:141] libmachine: () Calling .GetVersion
	I0903 23:32:14.224409  153807 main.go:141] libmachine: Using API Version  1
	I0903 23:32:14.224425  153807 main.go:141] libmachine: () Calling .SetConfigRaw
	I0903 23:32:14.224743  153807 main.go:141] libmachine: () Calling .GetMachineName
	I0903 23:32:14.224923  153807 main.go:141] libmachine: (NoKubernetes-561956) Calling .DriverName
	I0903 23:32:14.225079  153807 main.go:141] libmachine: (NoKubernetes-561956) Calling .GetState
	I0903 23:32:14.226766  153807 fix.go:112] recreateIfNeeded on NoKubernetes-561956: state=Stopped err=<nil>
	I0903 23:32:14.226801  153807 main.go:141] libmachine: (NoKubernetes-561956) Calling .DriverName
	W0903 23:32:14.226966  153807 fix.go:138] unexpected machine state, will restart: <nil>
	I0903 23:32:14.381783  153807 out.go:252] * Restarting existing kvm2 VM for "NoKubernetes-561956" ...
	I0903 23:32:14.381854  153807 main.go:141] libmachine: (NoKubernetes-561956) Calling .Start
	I0903 23:32:14.382195  153807 main.go:141] libmachine: (NoKubernetes-561956) starting domain...
	I0903 23:32:14.382215  153807 main.go:141] libmachine: (NoKubernetes-561956) ensuring networks are active...
	I0903 23:32:14.383262  153807 main.go:141] libmachine: (NoKubernetes-561956) Ensuring network default is active
	I0903 23:32:14.383658  153807 main.go:141] libmachine: (NoKubernetes-561956) Ensuring network mk-NoKubernetes-561956 is active
	I0903 23:32:14.384089  153807 main.go:141] libmachine: (NoKubernetes-561956) getting domain XML...
	I0903 23:32:14.384963  153807 main.go:141] libmachine: (NoKubernetes-561956) creating domain...
	I0903 23:32:13.937621  153376 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0903 23:32:13.937658  153376 machine.go:96] duration metric: took 6.864020668s to provisionDockerMachine
	I0903 23:32:13.937672  153376 start.go:293] postStartSetup for "kubernetes-upgrade-938492" (driver="kvm2")
	I0903 23:32:13.937685  153376 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0903 23:32:13.937713  153376 main.go:141] libmachine: (kubernetes-upgrade-938492) Calling .DriverName
	I0903 23:32:13.938066  153376 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0903 23:32:13.938107  153376 main.go:141] libmachine: (kubernetes-upgrade-938492) Calling .GetSSHHostname
	I0903 23:32:13.941081  153376 main.go:141] libmachine: (kubernetes-upgrade-938492) DBG | domain kubernetes-upgrade-938492 has defined MAC address 52:54:00:8d:7e:6a in network mk-kubernetes-upgrade-938492
	I0903 23:32:13.941482  153376 main.go:141] libmachine: (kubernetes-upgrade-938492) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:7e:6a", ip: ""} in network mk-kubernetes-upgrade-938492: {Iface:virbr2 ExpiryTime:2025-09-04 00:31:10 +0000 UTC Type:0 Mac:52:54:00:8d:7e:6a Iaid: IPaddr:192.168.50.53 Prefix:24 Hostname:kubernetes-upgrade-938492 Clientid:01:52:54:00:8d:7e:6a}
	I0903 23:32:13.941510  153376 main.go:141] libmachine: (kubernetes-upgrade-938492) DBG | domain kubernetes-upgrade-938492 has defined IP address 192.168.50.53 and MAC address 52:54:00:8d:7e:6a in network mk-kubernetes-upgrade-938492
	I0903 23:32:13.941661  153376 main.go:141] libmachine: (kubernetes-upgrade-938492) Calling .GetSSHPort
	I0903 23:32:13.941873  153376 main.go:141] libmachine: (kubernetes-upgrade-938492) Calling .GetSSHKeyPath
	I0903 23:32:13.942022  153376 main.go:141] libmachine: (kubernetes-upgrade-938492) Calling .GetSSHUsername
	I0903 23:32:13.942205  153376 sshutil.go:53] new ssh client: &{IP:192.168.50.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21341-109162/.minikube/machines/kubernetes-upgrade-938492/id_rsa Username:docker}
	I0903 23:32:14.032413  153376 ssh_runner.go:195] Run: cat /etc/os-release
	I0903 23:32:14.037616  153376 info.go:137] Remote host: Buildroot 2025.02
	I0903 23:32:14.037646  153376 filesync.go:126] Scanning /home/jenkins/minikube-integration/21341-109162/.minikube/addons for local assets ...
	I0903 23:32:14.037725  153376 filesync.go:126] Scanning /home/jenkins/minikube-integration/21341-109162/.minikube/files for local assets ...
	I0903 23:32:14.037841  153376 filesync.go:149] local asset: /home/jenkins/minikube-integration/21341-109162/.minikube/files/etc/ssl/certs/1132882.pem -> 1132882.pem in /etc/ssl/certs
	I0903 23:32:14.037965  153376 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0903 23:32:14.051442  153376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/files/etc/ssl/certs/1132882.pem --> /etc/ssl/certs/1132882.pem (1708 bytes)
	I0903 23:32:14.085274  153376 start.go:296] duration metric: took 147.584418ms for postStartSetup
	I0903 23:32:14.085330  153376 fix.go:56] duration metric: took 7.038470512s for fixHost
	I0903 23:32:14.085355  153376 main.go:141] libmachine: (kubernetes-upgrade-938492) Calling .GetSSHHostname
	I0903 23:32:14.088444  153376 main.go:141] libmachine: (kubernetes-upgrade-938492) DBG | domain kubernetes-upgrade-938492 has defined MAC address 52:54:00:8d:7e:6a in network mk-kubernetes-upgrade-938492
	I0903 23:32:14.088835  153376 main.go:141] libmachine: (kubernetes-upgrade-938492) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:7e:6a", ip: ""} in network mk-kubernetes-upgrade-938492: {Iface:virbr2 ExpiryTime:2025-09-04 00:31:10 +0000 UTC Type:0 Mac:52:54:00:8d:7e:6a Iaid: IPaddr:192.168.50.53 Prefix:24 Hostname:kubernetes-upgrade-938492 Clientid:01:52:54:00:8d:7e:6a}
	I0903 23:32:14.088870  153376 main.go:141] libmachine: (kubernetes-upgrade-938492) DBG | domain kubernetes-upgrade-938492 has defined IP address 192.168.50.53 and MAC address 52:54:00:8d:7e:6a in network mk-kubernetes-upgrade-938492
	I0903 23:32:14.089102  153376 main.go:141] libmachine: (kubernetes-upgrade-938492) Calling .GetSSHPort
	I0903 23:32:14.089405  153376 main.go:141] libmachine: (kubernetes-upgrade-938492) Calling .GetSSHKeyPath
	I0903 23:32:14.089573  153376 main.go:141] libmachine: (kubernetes-upgrade-938492) Calling .GetSSHKeyPath
	I0903 23:32:14.089758  153376 main.go:141] libmachine: (kubernetes-upgrade-938492) Calling .GetSSHUsername
	I0903 23:32:14.089915  153376 main.go:141] libmachine: Using SSH client type: native
	I0903 23:32:14.090115  153376 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83a420] 0x83d120 <nil>  [] 0s} 192.168.50.53 22 <nil> <nil>}
	I0903 23:32:14.090126  153376 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0903 23:32:14.203709  153376 main.go:141] libmachine: SSH cmd err, output: <nil>: 1756942334.200546294
	
	I0903 23:32:14.203739  153376 fix.go:216] guest clock: 1756942334.200546294
	I0903 23:32:14.203750  153376 fix.go:229] Guest: 2025-09-03 23:32:14.200546294 +0000 UTC Remote: 2025-09-03 23:32:14.085334229 +0000 UTC m=+33.379975371 (delta=115.212065ms)
	I0903 23:32:14.203778  153376 fix.go:200] guest clock delta is within tolerance: 115.212065ms
	I0903 23:32:14.203792  153376 start.go:83] releasing machines lock for "kubernetes-upgrade-938492", held for 7.156965932s
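
(The guest-clock check above runs date +%s.%N in the VM and compares the result with the host clock; the ~115ms delta is within tolerance, so no clock resync is forced. A Go sketch of parsing that output and computing the delta; parseGuestClock is illustrative.)

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock turns the output of `date +%s.%N` (seconds.nanoseconds,
// as run over SSH in the log) into a time.Time so the host can measure the
// guest's clock skew against its own clock.
func parseGuestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	// Sample value taken from the log; the delta is only meaningful when
	// computed immediately after running the command in the guest.
	guest, err := parseGuestClock("1756942334.200546294\n")
	if err != nil {
		panic(err)
	}
	fmt.Printf("guest clock delta: %s\n", time.Since(guest))
}
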
	I0903 23:32:14.203825  153376 main.go:141] libmachine: (kubernetes-upgrade-938492) Calling .DriverName
	I0903 23:32:14.204197  153376 main.go:141] libmachine: (kubernetes-upgrade-938492) Calling .GetIP
	I0903 23:32:14.207571  153376 main.go:141] libmachine: (kubernetes-upgrade-938492) DBG | domain kubernetes-upgrade-938492 has defined MAC address 52:54:00:8d:7e:6a in network mk-kubernetes-upgrade-938492
	I0903 23:32:14.207974  153376 main.go:141] libmachine: (kubernetes-upgrade-938492) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:7e:6a", ip: ""} in network mk-kubernetes-upgrade-938492: {Iface:virbr2 ExpiryTime:2025-09-04 00:31:10 +0000 UTC Type:0 Mac:52:54:00:8d:7e:6a Iaid: IPaddr:192.168.50.53 Prefix:24 Hostname:kubernetes-upgrade-938492 Clientid:01:52:54:00:8d:7e:6a}
	I0903 23:32:14.208008  153376 main.go:141] libmachine: (kubernetes-upgrade-938492) DBG | domain kubernetes-upgrade-938492 has defined IP address 192.168.50.53 and MAC address 52:54:00:8d:7e:6a in network mk-kubernetes-upgrade-938492
	I0903 23:32:14.208269  153376 main.go:141] libmachine: (kubernetes-upgrade-938492) Calling .DriverName
	I0903 23:32:14.208894  153376 main.go:141] libmachine: (kubernetes-upgrade-938492) Calling .DriverName
	I0903 23:32:14.209083  153376 main.go:141] libmachine: (kubernetes-upgrade-938492) Calling .DriverName
	I0903 23:32:14.209196  153376 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0903 23:32:14.209273  153376 main.go:141] libmachine: (kubernetes-upgrade-938492) Calling .GetSSHHostname
	I0903 23:32:14.209337  153376 ssh_runner.go:195] Run: cat /version.json
	I0903 23:32:14.209367  153376 main.go:141] libmachine: (kubernetes-upgrade-938492) Calling .GetSSHHostname
	I0903 23:32:14.212141  153376 main.go:141] libmachine: (kubernetes-upgrade-938492) DBG | domain kubernetes-upgrade-938492 has defined MAC address 52:54:00:8d:7e:6a in network mk-kubernetes-upgrade-938492
	I0903 23:32:14.212432  153376 main.go:141] libmachine: (kubernetes-upgrade-938492) DBG | domain kubernetes-upgrade-938492 has defined MAC address 52:54:00:8d:7e:6a in network mk-kubernetes-upgrade-938492
	I0903 23:32:14.212723  153376 main.go:141] libmachine: (kubernetes-upgrade-938492) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:7e:6a", ip: ""} in network mk-kubernetes-upgrade-938492: {Iface:virbr2 ExpiryTime:2025-09-04 00:31:10 +0000 UTC Type:0 Mac:52:54:00:8d:7e:6a Iaid: IPaddr:192.168.50.53 Prefix:24 Hostname:kubernetes-upgrade-938492 Clientid:01:52:54:00:8d:7e:6a}
	I0903 23:32:14.212749  153376 main.go:141] libmachine: (kubernetes-upgrade-938492) DBG | domain kubernetes-upgrade-938492 has defined IP address 192.168.50.53 and MAC address 52:54:00:8d:7e:6a in network mk-kubernetes-upgrade-938492
	I0903 23:32:14.212847  153376 main.go:141] libmachine: (kubernetes-upgrade-938492) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:7e:6a", ip: ""} in network mk-kubernetes-upgrade-938492: {Iface:virbr2 ExpiryTime:2025-09-04 00:31:10 +0000 UTC Type:0 Mac:52:54:00:8d:7e:6a Iaid: IPaddr:192.168.50.53 Prefix:24 Hostname:kubernetes-upgrade-938492 Clientid:01:52:54:00:8d:7e:6a}
	I0903 23:32:14.212878  153376 main.go:141] libmachine: (kubernetes-upgrade-938492) Calling .GetSSHPort
	I0903 23:32:14.212884  153376 main.go:141] libmachine: (kubernetes-upgrade-938492) DBG | domain kubernetes-upgrade-938492 has defined IP address 192.168.50.53 and MAC address 52:54:00:8d:7e:6a in network mk-kubernetes-upgrade-938492
	I0903 23:32:14.213054  153376 main.go:141] libmachine: (kubernetes-upgrade-938492) Calling .GetSSHKeyPath
	I0903 23:32:14.213073  153376 main.go:141] libmachine: (kubernetes-upgrade-938492) Calling .GetSSHPort
	I0903 23:32:14.213252  153376 main.go:141] libmachine: (kubernetes-upgrade-938492) Calling .GetSSHUsername
	I0903 23:32:14.213260  153376 main.go:141] libmachine: (kubernetes-upgrade-938492) Calling .GetSSHKeyPath
	I0903 23:32:14.213475  153376 main.go:141] libmachine: (kubernetes-upgrade-938492) Calling .GetSSHUsername
	I0903 23:32:14.213491  153376 sshutil.go:53] new ssh client: &{IP:192.168.50.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21341-109162/.minikube/machines/kubernetes-upgrade-938492/id_rsa Username:docker}
	I0903 23:32:14.213630  153376 sshutil.go:53] new ssh client: &{IP:192.168.50.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21341-109162/.minikube/machines/kubernetes-upgrade-938492/id_rsa Username:docker}
	I0903 23:32:14.329041  153376 ssh_runner.go:195] Run: systemctl --version
	I0903 23:32:14.335608  153376 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0903 23:32:14.484993  153376 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0903 23:32:14.494517  153376 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0903 23:32:14.494614  153376 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0903 23:32:14.505523  153376 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0903 23:32:14.505553  153376 start.go:495] detecting cgroup driver to use...
	I0903 23:32:14.505628  153376 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0903 23:32:14.526849  153376 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0903 23:32:14.544686  153376 docker.go:218] disabling cri-docker service (if available) ...
	I0903 23:32:14.544762  153376 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0903 23:32:14.567444  153376 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0903 23:32:14.582768  153376 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0903 23:32:14.759244  153376 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0903 23:32:14.963255  153376 docker.go:234] disabling docker service ...
	I0903 23:32:14.963339  153376 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0903 23:32:14.996251  153376 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0903 23:32:15.015532  153376 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0903 23:32:15.219614  153376 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0903 23:32:15.440139  153376 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0903 23:32:15.457610  153376 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0903 23:32:15.503684  153376 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0903 23:32:15.503772  153376 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0903 23:32:15.519658  153376 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0903 23:32:15.519745  153376 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0903 23:32:15.535254  153376 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0903 23:32:15.547953  153376 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0903 23:32:15.560014  153376 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0903 23:32:15.571929  153376 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0903 23:32:15.584248  153376 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0903 23:32:15.596329  153376 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0903 23:32:15.609210  153376 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0903 23:32:15.625725  153376 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0903 23:32:15.642077  153376 ssh_runner.go:195] Run: sudo systemctl daemon-reload
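
(The run of sed commands above edits /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image, switch cgroup_manager to cgroupfs, add conmon_cgroup, and inject net.ipv4.ip_unprivileged_port_start=0 into default_sysctls. A Go sketch of the whole-line rewrite each `sed -i 's|^.*key = .*$|...|'` performs; rewriteConfLine is illustrative.)

package main

import (
	"fmt"
	"regexp"
)

// rewriteConfLine replaces any existing assignment of key (commented or
// not) with the desired quoted value, like the sed one-liners in the log.
func rewriteConfLine(conf, key, value string) string {
	re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
	return re.ReplaceAllString(conf, fmt.Sprintf("%s = %q", key, value))
}

func main() {
	conf := "# pause_image = \"registry.k8s.io/pause:3.9\"\ncgroup_manager = \"systemd\"\n"
	conf = rewriteConfLine(conf, "pause_image", "registry.k8s.io/pause:3.10.1")
	conf = rewriteConfLine(conf, "cgroup_manager", "cgroupfs")
	fmt.Print(conf)
}
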
	I0903 23:32:13.841023  153025 out.go:252]   - Generating certificates and keys ...
	I0903 23:32:13.841181  153025 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0903 23:32:13.841343  153025 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0903 23:32:14.233831  153025 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0903 23:32:14.617372  153025 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0903 23:32:14.851249  153025 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0903 23:32:15.178922  153025 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0903 23:32:15.481820  153025 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0903 23:32:15.482045  153025 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [auto-380966 localhost] and IPs [192.168.61.89 127.0.0.1 ::1]
	I0903 23:32:15.735016  153025 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0903 23:32:15.735310  153025 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [auto-380966 localhost] and IPs [192.168.61.89 127.0.0.1 ::1]
	I0903 23:32:15.959716  153025 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0903 23:32:16.298151  153025 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0903 23:32:16.595934  153025 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0903 23:32:16.596039  153025 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0903 23:32:17.302392  153025 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0903 23:32:15.830885  153376 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0903 23:32:18.256695  153376 ssh_runner.go:235] Completed: sudo systemctl restart crio: (2.425764572s)
	I0903 23:32:18.256740  153376 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0903 23:32:18.256805  153376 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0903 23:32:18.262615  153376 start.go:563] Will wait 60s for crictl version
	I0903 23:32:18.262689  153376 ssh_runner.go:195] Run: which crictl
	I0903 23:32:18.266749  153376 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0903 23:32:18.309922  153376 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0903 23:32:18.310016  153376 ssh_runner.go:195] Run: crio --version
	I0903 23:32:18.349245  153376 ssh_runner.go:195] Run: crio --version
	I0903 23:32:18.380903  153376 out.go:179] * Preparing Kubernetes v1.34.0 on CRI-O 1.29.1 ...
	I0903 23:32:17.536171  153025 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0903 23:32:17.816431  153025 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0903 23:32:18.278748  153025 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0903 23:32:18.520712  153025 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0903 23:32:18.521877  153025 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0903 23:32:18.526799  153025 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0903 23:32:16.352816  153807 main.go:141] libmachine: (NoKubernetes-561956) waiting for IP...
	I0903 23:32:16.353657  153807 main.go:141] libmachine: (NoKubernetes-561956) DBG | domain NoKubernetes-561956 has defined MAC address 52:54:00:f3:b5:cd in network mk-NoKubernetes-561956
	I0903 23:32:16.354151  153807 main.go:141] libmachine: (NoKubernetes-561956) DBG | unable to find current IP address of domain NoKubernetes-561956 in network mk-NoKubernetes-561956
	I0903 23:32:16.354237  153807 main.go:141] libmachine: (NoKubernetes-561956) DBG | I0903 23:32:16.354154  154000 retry.go:31] will retry after 220.919323ms: waiting for domain to come up
	I0903 23:32:16.576855  153807 main.go:141] libmachine: (NoKubernetes-561956) DBG | domain NoKubernetes-561956 has defined MAC address 52:54:00:f3:b5:cd in network mk-NoKubernetes-561956
	I0903 23:32:16.577489  153807 main.go:141] libmachine: (NoKubernetes-561956) DBG | unable to find current IP address of domain NoKubernetes-561956 in network mk-NoKubernetes-561956
	I0903 23:32:16.577511  153807 main.go:141] libmachine: (NoKubernetes-561956) DBG | I0903 23:32:16.577435  154000 retry.go:31] will retry after 272.496645ms: waiting for domain to come up
	I0903 23:32:16.852099  153807 main.go:141] libmachine: (NoKubernetes-561956) DBG | domain NoKubernetes-561956 has defined MAC address 52:54:00:f3:b5:cd in network mk-NoKubernetes-561956
	I0903 23:32:16.852569  153807 main.go:141] libmachine: (NoKubernetes-561956) DBG | unable to find current IP address of domain NoKubernetes-561956 in network mk-NoKubernetes-561956
	I0903 23:32:16.852598  153807 main.go:141] libmachine: (NoKubernetes-561956) DBG | I0903 23:32:16.852525  154000 retry.go:31] will retry after 430.100281ms: waiting for domain to come up
	I0903 23:32:17.284264  153807 main.go:141] libmachine: (NoKubernetes-561956) DBG | domain NoKubernetes-561956 has defined MAC address 52:54:00:f3:b5:cd in network mk-NoKubernetes-561956
	I0903 23:32:17.284809  153807 main.go:141] libmachine: (NoKubernetes-561956) DBG | unable to find current IP address of domain NoKubernetes-561956 in network mk-NoKubernetes-561956
	I0903 23:32:17.284827  153807 main.go:141] libmachine: (NoKubernetes-561956) DBG | I0903 23:32:17.284771  154000 retry.go:31] will retry after 421.724766ms: waiting for domain to come up
	I0903 23:32:17.708625  153807 main.go:141] libmachine: (NoKubernetes-561956) DBG | domain NoKubernetes-561956 has defined MAC address 52:54:00:f3:b5:cd in network mk-NoKubernetes-561956
	I0903 23:32:17.709207  153807 main.go:141] libmachine: (NoKubernetes-561956) DBG | unable to find current IP address of domain NoKubernetes-561956 in network mk-NoKubernetes-561956
	I0903 23:32:17.709228  153807 main.go:141] libmachine: (NoKubernetes-561956) DBG | I0903 23:32:17.709151  154000 retry.go:31] will retry after 691.988997ms: waiting for domain to come up
	I0903 23:32:18.403203  153807 main.go:141] libmachine: (NoKubernetes-561956) DBG | domain NoKubernetes-561956 has defined MAC address 52:54:00:f3:b5:cd in network mk-NoKubernetes-561956
	I0903 23:32:18.403742  153807 main.go:141] libmachine: (NoKubernetes-561956) DBG | unable to find current IP address of domain NoKubernetes-561956 in network mk-NoKubernetes-561956
	I0903 23:32:18.403806  153807 main.go:141] libmachine: (NoKubernetes-561956) DBG | I0903 23:32:18.403722  154000 retry.go:31] will retry after 771.524157ms: waiting for domain to come up
	I0903 23:32:19.177316  153807 main.go:141] libmachine: (NoKubernetes-561956) DBG | domain NoKubernetes-561956 has defined MAC address 52:54:00:f3:b5:cd in network mk-NoKubernetes-561956
	I0903 23:32:19.177783  153807 main.go:141] libmachine: (NoKubernetes-561956) DBG | unable to find current IP address of domain NoKubernetes-561956 in network mk-NoKubernetes-561956
	I0903 23:32:19.177804  153807 main.go:141] libmachine: (NoKubernetes-561956) DBG | I0903 23:32:19.177749  154000 retry.go:31] will retry after 835.004244ms: waiting for domain to come up
	I0903 23:32:20.014052  153807 main.go:141] libmachine: (NoKubernetes-561956) DBG | domain NoKubernetes-561956 has defined MAC address 52:54:00:f3:b5:cd in network mk-NoKubernetes-561956
	I0903 23:32:20.014640  153807 main.go:141] libmachine: (NoKubernetes-561956) DBG | unable to find current IP address of domain NoKubernetes-561956 in network mk-NoKubernetes-561956
	I0903 23:32:20.014679  153807 main.go:141] libmachine: (NoKubernetes-561956) DBG | I0903 23:32:20.014606  154000 retry.go:31] will retry after 1.135889291s: waiting for domain to come up
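
(The retry.go lines above show the wait-for-IP loop: each failed DHCP-lease lookup schedules another attempt after a roughly growing, jittered delay (221ms, 272ms, 430ms, ... 1.13s) until the domain comes up. A generic Go sketch of that backoff pattern; waitForIP and the fake lookup are illustrative.)

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitForIP polls lookup until it returns an address, sleeping for a
// jittered, growing backoff between attempts, like the retry lines above.
func waitForIP(lookup func() (string, error), attempts int) (string, error) {
	delay := 200 * time.Millisecond
	for i := 0; i < attempts; i++ {
		if ip, err := lookup(); err == nil {
			return ip, nil
		}
		wait := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %s: waiting for domain to come up\n", wait)
		time.Sleep(wait)
		delay += delay / 2 // grow ~1.5x per attempt
	}
	return "", errors.New("domain never acquired an IP")
}

func main() {
	calls := 0
	ip, err := waitForIP(func() (string, error) {
		if calls++; calls < 3 {
			return "", errors.New("no lease yet")
		}
		return "192.168.72.10", nil // illustrative address
	}, 10)
	fmt.Println(ip, err)
}
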
	I0903 23:32:18.382030  153376 main.go:141] libmachine: (kubernetes-upgrade-938492) Calling .GetIP
	I0903 23:32:18.385263  153376 main.go:141] libmachine: (kubernetes-upgrade-938492) DBG | domain kubernetes-upgrade-938492 has defined MAC address 52:54:00:8d:7e:6a in network mk-kubernetes-upgrade-938492
	I0903 23:32:18.385742  153376 main.go:141] libmachine: (kubernetes-upgrade-938492) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:7e:6a", ip: ""} in network mk-kubernetes-upgrade-938492: {Iface:virbr2 ExpiryTime:2025-09-04 00:31:10 +0000 UTC Type:0 Mac:52:54:00:8d:7e:6a Iaid: IPaddr:192.168.50.53 Prefix:24 Hostname:kubernetes-upgrade-938492 Clientid:01:52:54:00:8d:7e:6a}
	I0903 23:32:18.385785  153376 main.go:141] libmachine: (kubernetes-upgrade-938492) DBG | domain kubernetes-upgrade-938492 has defined IP address 192.168.50.53 and MAC address 52:54:00:8d:7e:6a in network mk-kubernetes-upgrade-938492
	I0903 23:32:18.386037  153376 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0903 23:32:18.390673  153376 kubeadm.go:875] updating cluster {Name:kubernetes-upgrade-938492 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21147/minikube-v1.36.0-1753487480-21147-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:kubernetes-upgrade-938492 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.53 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0903 23:32:18.390810  153376 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0903 23:32:18.390871  153376 ssh_runner.go:195] Run: sudo crictl images --output json
	I0903 23:32:18.438595  153376 crio.go:514] all images are preloaded for cri-o runtime.
	I0903 23:32:18.438625  153376 crio.go:433] Images already preloaded, skipping extraction
	I0903 23:32:18.438695  153376 ssh_runner.go:195] Run: sudo crictl images --output json
	I0903 23:32:18.476859  153376 crio.go:514] all images are preloaded for cri-o runtime.
	I0903 23:32:18.476887  153376 cache_images.go:85] Images are preloaded, skipping loading
	I0903 23:32:18.476897  153376 kubeadm.go:926] updating node { 192.168.50.53 8443 v1.34.0 crio true true} ...
	I0903 23:32:18.477014  153376 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kubernetes-upgrade-938492 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.53
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:kubernetes-upgrade-938492 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0903 23:32:18.477145  153376 ssh_runner.go:195] Run: crio config
	I0903 23:32:18.527009  153376 cni.go:84] Creating CNI manager for ""
	I0903 23:32:18.527033  153376 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0903 23:32:18.527047  153376 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0903 23:32:18.527081  153376 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.53 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-938492 NodeName:kubernetes-upgrade-938492 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.53"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.53 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0903 23:32:18.527282  153376 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.53
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kubernetes-upgrade-938492"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.50.53"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.53"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0903 23:32:18.527376  153376 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0903 23:32:18.540942  153376 binaries.go:44] Found k8s binaries, skipping transfer
	I0903 23:32:18.541014  153376 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0903 23:32:18.556559  153376 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (324 bytes)
	I0903 23:32:18.577960  153376 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0903 23:32:18.602449  153376 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
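The 2225-byte file written above is the multi-document kubeadm config rendered earlier (InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration joined by `---`). As a sketch, such a file can be sanity-checked before kubeadm consumes it; `kubeadm config validate` is available in recent kubeadm releases, and the binary path here is reused from the log:

	# Validate the generated multi-document kubeadm config (hedged sketch)
	sudo /var/lib/minikube/binaries/v1.34.0/kubeadm config validate \
		--config /var/tmp/minikube/kubeadm.yaml.new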
	I0903 23:32:18.626049  153376 ssh_runner.go:195] Run: grep 192.168.50.53	control-plane.minikube.internal$ /etc/hosts
	I0903 23:32:18.631034  153376 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0903 23:32:18.812522  153376 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0903 23:32:18.830626  153376 certs.go:68] Setting up /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/kubernetes-upgrade-938492 for IP: 192.168.50.53
	I0903 23:32:18.830652  153376 certs.go:194] generating shared ca certs ...
	I0903 23:32:18.830674  153376 certs.go:226] acquiring lock for ca certs: {Name:mk06ceac58fd506484973632a7bd0b701183c2c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0903 23:32:18.830847  153376 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21341-109162/.minikube/ca.key
	I0903 23:32:18.830895  153376 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21341-109162/.minikube/proxy-client-ca.key
	I0903 23:32:18.830906  153376 certs.go:256] generating profile certs ...
	I0903 23:32:18.831002  153376 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/kubernetes-upgrade-938492/client.key
	I0903 23:32:18.831067  153376 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/kubernetes-upgrade-938492/apiserver.key.e17636b7
	I0903 23:32:18.831113  153376 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/kubernetes-upgrade-938492/proxy-client.key
	I0903 23:32:18.831247  153376 certs.go:484] found cert: /home/jenkins/minikube-integration/21341-109162/.minikube/certs/113288.pem (1338 bytes)
	W0903 23:32:18.831295  153376 certs.go:480] ignoring /home/jenkins/minikube-integration/21341-109162/.minikube/certs/113288_empty.pem, impossibly tiny 0 bytes
	I0903 23:32:18.831311  153376 certs.go:484] found cert: /home/jenkins/minikube-integration/21341-109162/.minikube/certs/ca-key.pem (1675 bytes)
	I0903 23:32:18.831347  153376 certs.go:484] found cert: /home/jenkins/minikube-integration/21341-109162/.minikube/certs/ca.pem (1078 bytes)
	I0903 23:32:18.831377  153376 certs.go:484] found cert: /home/jenkins/minikube-integration/21341-109162/.minikube/certs/cert.pem (1123 bytes)
	I0903 23:32:18.831410  153376 certs.go:484] found cert: /home/jenkins/minikube-integration/21341-109162/.minikube/certs/key.pem (1675 bytes)
	I0903 23:32:18.831459  153376 certs.go:484] found cert: /home/jenkins/minikube-integration/21341-109162/.minikube/files/etc/ssl/certs/1132882.pem (1708 bytes)
	I0903 23:32:18.832194  153376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0903 23:32:18.860163  153376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0903 23:32:18.889834  153376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0903 23:32:18.919368  153376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0903 23:32:18.954483  153376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/kubernetes-upgrade-938492/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0903 23:32:18.987388  153376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/kubernetes-upgrade-938492/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0903 23:32:19.022071  153376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/kubernetes-upgrade-938492/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0903 23:32:19.060473  153376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/kubernetes-upgrade-938492/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0903 23:32:19.095219  153376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/certs/113288.pem --> /usr/share/ca-certificates/113288.pem (1338 bytes)
	I0903 23:32:19.130997  153376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/files/etc/ssl/certs/1132882.pem --> /usr/share/ca-certificates/1132882.pem (1708 bytes)
	I0903 23:32:19.163536  153376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0903 23:32:19.193862  153376 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0903 23:32:19.213149  153376 ssh_runner.go:195] Run: openssl version
	I0903 23:32:19.220501  153376 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1132882.pem && ln -fs /usr/share/ca-certificates/1132882.pem /etc/ssl/certs/1132882.pem"
	I0903 23:32:19.237442  153376 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1132882.pem
	I0903 23:32:19.243802  153376 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep  3 22:36 /usr/share/ca-certificates/1132882.pem
	I0903 23:32:19.243878  153376 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1132882.pem
	I0903 23:32:19.253290  153376 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1132882.pem /etc/ssl/certs/3ec20f2e.0"
	I0903 23:32:19.269010  153376 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0903 23:32:19.286301  153376 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0903 23:32:19.293010  153376 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep  3 22:28 /usr/share/ca-certificates/minikubeCA.pem
	I0903 23:32:19.293088  153376 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0903 23:32:19.302370  153376 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0903 23:32:19.317159  153376 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/113288.pem && ln -fs /usr/share/ca-certificates/113288.pem /etc/ssl/certs/113288.pem"
	I0903 23:32:19.330846  153376 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/113288.pem
	I0903 23:32:19.335834  153376 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep  3 22:36 /usr/share/ca-certificates/113288.pem
	I0903 23:32:19.335896  153376 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/113288.pem
	I0903 23:32:19.342845  153376 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/113288.pem /etc/ssl/certs/51391683.0"
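The three `openssl x509 -hash` / `ln -fs` pairs above build the standard OpenSSL CA directory layout: each certificate under /usr/share/ca-certificates gets a symlink in /etc/ssl/certs named <subject-hash>.0, which is how verification code locates a CA by hash. A minimal sketch of one such step, with the cert path reused from the log:

	cert=/usr/share/ca-certificates/minikubeCA.pem
	hash=$(openssl x509 -hash -noout -in "$cert")   # prints the subject hash, e.g. b5213941
	sudo ln -fs "$cert" "/etc/ssl/certs/${hash}.0"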
	I0903 23:32:19.353826  153376 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0903 23:32:19.358918  153376 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0903 23:32:19.365709  153376 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0903 23:32:19.374362  153376 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0903 23:32:19.381369  153376 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0903 23:32:19.390185  153376 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0903 23:32:19.398864  153376 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
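The `-checkend 86400` probes above ask openssl whether each certificate will still be valid 86400 seconds (24 hours) from now; a non-zero exit means expired or about to expire, which is what would trigger regeneration. An equivalent standalone check, as a sketch over two of the paths from the log:

	for crt in /var/lib/minikube/certs/apiserver-kubelet-client.crt \
	           /var/lib/minikube/certs/etcd/server.crt; do
		sudo openssl x509 -noout -in "$crt" -checkend 86400 \
			&& echo "$crt: valid for at least 24h" \
			|| echo "$crt: expiring or expired"
	done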
	I0903 23:32:19.407708  153376 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-938492 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21147/minikube-v1.36.0-1753487480-21147-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:kubernetes-upgrade-938492 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.53 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0903 23:32:19.407808  153376 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0903 23:32:19.407865  153376 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0903 23:32:19.452965  153376 cri.go:89] found id: "42080da53264c939cb2a8684526413a1f6b7233691abe2694ac83526cec39ca6"
	I0903 23:32:19.452993  153376 cri.go:89] found id: "a87e871225017b51e617c9bdc39d971995ec56af349341101cca1bb49b22fda5"
	I0903 23:32:19.452998  153376 cri.go:89] found id: "43b91390d95df25c63b0591a412c959c2b8ba147cb9772abe44f4b237ac457ad"
	I0903 23:32:19.453002  153376 cri.go:89] found id: "472c38bff12dce0806d0cbe40914899a2e17e139ab5b5e0663d60c8dd4f97074"
	I0903 23:32:19.453007  153376 cri.go:89] found id: "51066f5c69558e38dc64123b43cfe7baa28d61121486fed6d51429000138d7b2"
	I0903 23:32:19.453012  153376 cri.go:89] found id: "1b327ef089715321bdb237e57234b89fb8558535ccd31dc9eadce0c8911c8d90"
	I0903 23:32:19.453016  153376 cri.go:89] found id: "ab83ffbe7ee6b1e60fb9b4ef3deeeaf007bd57bf3ea0cd328e1cd362a6ca0eb7"
	I0903 23:32:19.453020  153376 cri.go:89] found id: "381e7a15b7a3ccf13b0968ab4fcc413b78e40d64bc6b1fe7f6fe12e98967c3df"
	I0903 23:32:19.453023  153376 cri.go:89] found id: ""
	I0903 23:32:19.453093  153376 ssh_runner.go:195] Run: sudo runc list -f json

-- /stdout --
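The captured log (limited to 25 lines by `logs -n 25`) ends mid-listing: the container IDs come from the `crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system` call shown above, after which the runner cross-checks runtime state with `sudo runc list -f json`. A sketch of pulling the IDs out of that JSON by hand, assuming `jq` is available on the guest:

	# IDs of every container runc knows about, to compare against the crictl list
	sudo runc list -f json | jq -r '.[].id'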
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-938492 -n kubernetes-upgrade-938492
helpers_test.go:269: (dbg) Run:  kubectl --context kubernetes-upgrade-938492 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestKubernetesUpgrade FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "kubernetes-upgrade-938492" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-938492
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-938492: (1.160656461s)
--- FAIL: TestKubernetesUpgrade (416.73s)

TestPause/serial/SecondStartNoReconfiguration (48.79s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-957460 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-957460 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (43.918527287s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-957460] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21341
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21341-109162/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21341-109162/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "pause-957460" primary control-plane node in "pause-957460" cluster
	* Preparing Kubernetes v1.34.0 on CRI-O 1.29.1 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	* Enabled addons: 
	* Done! kubectl is now configured to use "pause-957460" cluster and "default" namespace by default

-- /stdout --
** stderr ** 
	I0903 23:29:33.688829  151207 out.go:360] Setting OutFile to fd 1 ...
	I0903 23:29:33.688915  151207 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0903 23:29:33.688920  151207 out.go:374] Setting ErrFile to fd 2...
	I0903 23:29:33.688923  151207 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0903 23:29:33.689125  151207 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21341-109162/.minikube/bin
	I0903 23:29:33.689708  151207 out.go:368] Setting JSON to false
	I0903 23:29:33.690882  151207 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":7918,"bootTime":1756934256,"procs":215,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0903 23:29:33.690966  151207 start.go:140] virtualization: kvm guest
	I0903 23:29:33.692665  151207 out.go:179] * [pause-957460] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	I0903 23:29:33.693688  151207 out.go:179]   - MINIKUBE_LOCATION=21341
	I0903 23:29:33.693712  151207 notify.go:220] Checking for updates...
	I0903 23:29:33.695488  151207 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0903 23:29:33.696526  151207 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21341-109162/kubeconfig
	I0903 23:29:33.697584  151207 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21341-109162/.minikube
	I0903 23:29:33.698540  151207 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0903 23:29:33.699483  151207 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0903 23:29:33.700981  151207 config.go:182] Loaded profile config "pause-957460": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0903 23:29:33.701419  151207 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21341-109162/.minikube/bin/docker-machine-driver-kvm2
	I0903 23:29:33.701507  151207 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0903 23:29:33.718298  151207 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41315
	I0903 23:29:33.718754  151207 main.go:141] libmachine: () Calling .GetVersion
	I0903 23:29:33.719400  151207 main.go:141] libmachine: Using API Version  1
	I0903 23:29:33.719434  151207 main.go:141] libmachine: () Calling .SetConfigRaw
	I0903 23:29:33.719843  151207 main.go:141] libmachine: () Calling .GetMachineName
	I0903 23:29:33.720084  151207 main.go:141] libmachine: (pause-957460) Calling .DriverName
	I0903 23:29:33.720371  151207 driver.go:421] Setting default libvirt URI to qemu:///system
	I0903 23:29:33.720874  151207 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21341-109162/.minikube/bin/docker-machine-driver-kvm2
	I0903 23:29:33.720928  151207 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0903 23:29:33.736550  151207 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38769
	I0903 23:29:33.737072  151207 main.go:141] libmachine: () Calling .GetVersion
	I0903 23:29:33.737566  151207 main.go:141] libmachine: Using API Version  1
	I0903 23:29:33.737589  151207 main.go:141] libmachine: () Calling .SetConfigRaw
	I0903 23:29:33.737958  151207 main.go:141] libmachine: () Calling .GetMachineName
	I0903 23:29:33.738178  151207 main.go:141] libmachine: (pause-957460) Calling .DriverName
	I0903 23:29:33.777793  151207 out.go:179] * Using the kvm2 driver based on existing profile
	I0903 23:29:33.779013  151207 start.go:304] selected driver: kvm2
	I0903 23:29:33.779028  151207 start.go:918] validating driver "kvm2" against &{Name:pause-957460 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21147/minikube-v1.36.0-1753487480-21147-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:pause-957460 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.90 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0903 23:29:33.779152  151207 start.go:929] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0903 23:29:33.779471  151207 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0903 23:29:33.779566  151207 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21341-109162/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0903 23:29:33.804245  151207 install.go:137] /home/jenkins/minikube-integration/21341-109162/.minikube/bin/docker-machine-driver-kvm2 version is 1.36.0
	I0903 23:29:33.805245  151207 cni.go:84] Creating CNI manager for ""
	I0903 23:29:33.805298  151207 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0903 23:29:33.805372  151207 start.go:348] cluster config:
	{Name:pause-957460 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21147/minikube-v1.36.0-1753487480-21147-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:pause-957460 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.90 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0903 23:29:33.805587  151207 iso.go:125] acquiring lock: {Name:mk1032fbced3c9e76ba5a04480289e9f07d0eb9c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0903 23:29:33.807686  151207 out.go:179] * Starting "pause-957460" primary control-plane node in "pause-957460" cluster
	I0903 23:29:33.808728  151207 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0903 23:29:33.808775  151207 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21341-109162/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4
	I0903 23:29:33.808791  151207 cache.go:58] Caching tarball of preloaded images
	I0903 23:29:33.808868  151207 preload.go:172] Found /home/jenkins/minikube-integration/21341-109162/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0903 23:29:33.808879  151207 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
	I0903 23:29:33.809028  151207 profile.go:143] Saving config to /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/pause-957460/config.json ...
	I0903 23:29:33.809296  151207 start.go:360] acquireMachinesLock for pause-957460: {Name:mkcbe368d68a51a2a3c0eadc653c4df7d9736b4b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0903 23:29:35.296314  151207 start.go:364] duration metric: took 1.486981445s to acquireMachinesLock for "pause-957460"
	I0903 23:29:35.296400  151207 start.go:96] Skipping create...Using existing machine configuration
	I0903 23:29:35.296409  151207 fix.go:54] fixHost starting: 
	I0903 23:29:35.296882  151207 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21341-109162/.minikube/bin/docker-machine-driver-kvm2
	I0903 23:29:35.296923  151207 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0903 23:29:35.315101  151207 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38227
	I0903 23:29:35.315577  151207 main.go:141] libmachine: () Calling .GetVersion
	I0903 23:29:35.316101  151207 main.go:141] libmachine: Using API Version  1
	I0903 23:29:35.316132  151207 main.go:141] libmachine: () Calling .SetConfigRaw
	I0903 23:29:35.316475  151207 main.go:141] libmachine: () Calling .GetMachineName
	I0903 23:29:35.316674  151207 main.go:141] libmachine: (pause-957460) Calling .DriverName
	I0903 23:29:35.316813  151207 main.go:141] libmachine: (pause-957460) Calling .GetState
	I0903 23:29:35.318280  151207 fix.go:112] recreateIfNeeded on pause-957460: state=Running err=<nil>
	W0903 23:29:35.318303  151207 fix.go:138] unexpected machine state, will restart: <nil>
	I0903 23:29:35.320068  151207 out.go:252] * Updating the running kvm2 "pause-957460" VM ...
	I0903 23:29:35.320093  151207 machine.go:93] provisionDockerMachine start ...
	I0903 23:29:35.320104  151207 main.go:141] libmachine: (pause-957460) Calling .DriverName
	I0903 23:29:35.320298  151207 main.go:141] libmachine: (pause-957460) Calling .GetSSHHostname
	I0903 23:29:35.322936  151207 main.go:141] libmachine: (pause-957460) DBG | domain pause-957460 has defined MAC address 52:54:00:00:df:e5 in network mk-pause-957460
	I0903 23:29:35.323335  151207 main.go:141] libmachine: (pause-957460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:df:e5", ip: ""} in network mk-pause-957460: {Iface:virbr1 ExpiryTime:2025-09-04 00:28:42 +0000 UTC Type:0 Mac:52:54:00:00:df:e5 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:pause-957460 Clientid:01:52:54:00:00:df:e5}
	I0903 23:29:35.323360  151207 main.go:141] libmachine: (pause-957460) DBG | domain pause-957460 has defined IP address 192.168.39.90 and MAC address 52:54:00:00:df:e5 in network mk-pause-957460
	I0903 23:29:35.323507  151207 main.go:141] libmachine: (pause-957460) Calling .GetSSHPort
	I0903 23:29:35.323672  151207 main.go:141] libmachine: (pause-957460) Calling .GetSSHKeyPath
	I0903 23:29:35.323905  151207 main.go:141] libmachine: (pause-957460) Calling .GetSSHKeyPath
	I0903 23:29:35.324050  151207 main.go:141] libmachine: (pause-957460) Calling .GetSSHUsername
	I0903 23:29:35.324227  151207 main.go:141] libmachine: Using SSH client type: native
	I0903 23:29:35.324516  151207 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83a420] 0x83d120 <nil>  [] 0s} 192.168.39.90 22 <nil> <nil>}
	I0903 23:29:35.324531  151207 main.go:141] libmachine: About to run SSH command:
	hostname
	I0903 23:29:35.438588  151207 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-957460
	
	I0903 23:29:35.438635  151207 main.go:141] libmachine: (pause-957460) Calling .GetMachineName
	I0903 23:29:35.438889  151207 buildroot.go:166] provisioning hostname "pause-957460"
	I0903 23:29:35.438917  151207 main.go:141] libmachine: (pause-957460) Calling .GetMachineName
	I0903 23:29:35.439115  151207 main.go:141] libmachine: (pause-957460) Calling .GetSSHHostname
	I0903 23:29:35.442456  151207 main.go:141] libmachine: (pause-957460) DBG | domain pause-957460 has defined MAC address 52:54:00:00:df:e5 in network mk-pause-957460
	I0903 23:29:35.442962  151207 main.go:141] libmachine: (pause-957460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:df:e5", ip: ""} in network mk-pause-957460: {Iface:virbr1 ExpiryTime:2025-09-04 00:28:42 +0000 UTC Type:0 Mac:52:54:00:00:df:e5 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:pause-957460 Clientid:01:52:54:00:00:df:e5}
	I0903 23:29:35.442995  151207 main.go:141] libmachine: (pause-957460) DBG | domain pause-957460 has defined IP address 192.168.39.90 and MAC address 52:54:00:00:df:e5 in network mk-pause-957460
	I0903 23:29:35.443174  151207 main.go:141] libmachine: (pause-957460) Calling .GetSSHPort
	I0903 23:29:35.443378  151207 main.go:141] libmachine: (pause-957460) Calling .GetSSHKeyPath
	I0903 23:29:35.443535  151207 main.go:141] libmachine: (pause-957460) Calling .GetSSHKeyPath
	I0903 23:29:35.443677  151207 main.go:141] libmachine: (pause-957460) Calling .GetSSHUsername
	I0903 23:29:35.443850  151207 main.go:141] libmachine: Using SSH client type: native
	I0903 23:29:35.444144  151207 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83a420] 0x83d120 <nil>  [] 0s} 192.168.39.90 22 <nil> <nil>}
	I0903 23:29:35.444166  151207 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-957460 && echo "pause-957460" | sudo tee /etc/hostname
	I0903 23:29:35.573886  151207 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-957460
	
	I0903 23:29:35.573920  151207 main.go:141] libmachine: (pause-957460) Calling .GetSSHHostname
	I0903 23:29:35.576696  151207 main.go:141] libmachine: (pause-957460) DBG | domain pause-957460 has defined MAC address 52:54:00:00:df:e5 in network mk-pause-957460
	I0903 23:29:35.577038  151207 main.go:141] libmachine: (pause-957460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:df:e5", ip: ""} in network mk-pause-957460: {Iface:virbr1 ExpiryTime:2025-09-04 00:28:42 +0000 UTC Type:0 Mac:52:54:00:00:df:e5 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:pause-957460 Clientid:01:52:54:00:00:df:e5}
	I0903 23:29:35.577066  151207 main.go:141] libmachine: (pause-957460) DBG | domain pause-957460 has defined IP address 192.168.39.90 and MAC address 52:54:00:00:df:e5 in network mk-pause-957460
	I0903 23:29:35.577228  151207 main.go:141] libmachine: (pause-957460) Calling .GetSSHPort
	I0903 23:29:35.577436  151207 main.go:141] libmachine: (pause-957460) Calling .GetSSHKeyPath
	I0903 23:29:35.577619  151207 main.go:141] libmachine: (pause-957460) Calling .GetSSHKeyPath
	I0903 23:29:35.577790  151207 main.go:141] libmachine: (pause-957460) Calling .GetSSHUsername
	I0903 23:29:35.577973  151207 main.go:141] libmachine: Using SSH client type: native
	I0903 23:29:35.578213  151207 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83a420] 0x83d120 <nil>  [] 0s} 192.168.39.90 22 <nil> <nil>}
	I0903 23:29:35.578230  151207 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-957460' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-957460/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-957460' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0903 23:29:35.694230  151207 main.go:141] libmachine: SSH cmd err, output: <nil>: 
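The inlined script whose (empty) output is reported above keeps /etc/hosts consistent with the new hostname: if no line already ends in the hostname, it either rewrites an existing 127.0.1.1 entry in place or appends one. The same idempotent update in isolation, as a sketch with the hostname hard-coded for illustration:

	h=pause-957460
	if ! grep -q "[[:space:]]$h\$" /etc/hosts; then
		if grep -q '^127\.0\.1\.1[[:space:]]' /etc/hosts; then
			sudo sed -i "s/^127\.0\.1\.1[[:space:]].*/127.0.1.1 $h/" /etc/hosts
		else
			echo "127.0.1.1 $h" | sudo tee -a /etc/hosts
		fi
	fi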
	I0903 23:29:35.694259  151207 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21341-109162/.minikube CaCertPath:/home/jenkins/minikube-integration/21341-109162/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21341-109162/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21341-109162/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21341-109162/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21341-109162/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21341-109162/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21341-109162/.minikube}
	I0903 23:29:35.694283  151207 buildroot.go:174] setting up certificates
	I0903 23:29:35.694293  151207 provision.go:84] configureAuth start
	I0903 23:29:35.694306  151207 main.go:141] libmachine: (pause-957460) Calling .GetMachineName
	I0903 23:29:35.694577  151207 main.go:141] libmachine: (pause-957460) Calling .GetIP
	I0903 23:29:35.697672  151207 main.go:141] libmachine: (pause-957460) DBG | domain pause-957460 has defined MAC address 52:54:00:00:df:e5 in network mk-pause-957460
	I0903 23:29:35.698086  151207 main.go:141] libmachine: (pause-957460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:df:e5", ip: ""} in network mk-pause-957460: {Iface:virbr1 ExpiryTime:2025-09-04 00:28:42 +0000 UTC Type:0 Mac:52:54:00:00:df:e5 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:pause-957460 Clientid:01:52:54:00:00:df:e5}
	I0903 23:29:35.698117  151207 main.go:141] libmachine: (pause-957460) DBG | domain pause-957460 has defined IP address 192.168.39.90 and MAC address 52:54:00:00:df:e5 in network mk-pause-957460
	I0903 23:29:35.698311  151207 main.go:141] libmachine: (pause-957460) Calling .GetSSHHostname
	I0903 23:29:35.701203  151207 main.go:141] libmachine: (pause-957460) DBG | domain pause-957460 has defined MAC address 52:54:00:00:df:e5 in network mk-pause-957460
	I0903 23:29:35.701549  151207 main.go:141] libmachine: (pause-957460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:df:e5", ip: ""} in network mk-pause-957460: {Iface:virbr1 ExpiryTime:2025-09-04 00:28:42 +0000 UTC Type:0 Mac:52:54:00:00:df:e5 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:pause-957460 Clientid:01:52:54:00:00:df:e5}
	I0903 23:29:35.701579  151207 main.go:141] libmachine: (pause-957460) DBG | domain pause-957460 has defined IP address 192.168.39.90 and MAC address 52:54:00:00:df:e5 in network mk-pause-957460
	I0903 23:29:35.701721  151207 provision.go:143] copyHostCerts
	I0903 23:29:35.701782  151207 exec_runner.go:144] found /home/jenkins/minikube-integration/21341-109162/.minikube/ca.pem, removing ...
	I0903 23:29:35.701805  151207 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21341-109162/.minikube/ca.pem
	I0903 23:29:35.701858  151207 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21341-109162/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21341-109162/.minikube/ca.pem (1078 bytes)
	I0903 23:29:35.701943  151207 exec_runner.go:144] found /home/jenkins/minikube-integration/21341-109162/.minikube/cert.pem, removing ...
	I0903 23:29:35.701951  151207 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21341-109162/.minikube/cert.pem
	I0903 23:29:35.701970  151207 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21341-109162/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21341-109162/.minikube/cert.pem (1123 bytes)
	I0903 23:29:35.702034  151207 exec_runner.go:144] found /home/jenkins/minikube-integration/21341-109162/.minikube/key.pem, removing ...
	I0903 23:29:35.702041  151207 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21341-109162/.minikube/key.pem
	I0903 23:29:35.702057  151207 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21341-109162/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21341-109162/.minikube/key.pem (1675 bytes)
	I0903 23:29:35.702102  151207 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21341-109162/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21341-109162/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21341-109162/.minikube/certs/ca-key.pem org=jenkins.pause-957460 san=[127.0.0.1 192.168.39.90 localhost minikube pause-957460]
	I0903 23:29:36.149133  151207 provision.go:177] copyRemoteCerts
	I0903 23:29:36.149198  151207 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0903 23:29:36.149231  151207 main.go:141] libmachine: (pause-957460) Calling .GetSSHHostname
	I0903 23:29:36.152291  151207 main.go:141] libmachine: (pause-957460) DBG | domain pause-957460 has defined MAC address 52:54:00:00:df:e5 in network mk-pause-957460
	I0903 23:29:36.152816  151207 main.go:141] libmachine: (pause-957460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:df:e5", ip: ""} in network mk-pause-957460: {Iface:virbr1 ExpiryTime:2025-09-04 00:28:42 +0000 UTC Type:0 Mac:52:54:00:00:df:e5 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:pause-957460 Clientid:01:52:54:00:00:df:e5}
	I0903 23:29:36.152856  151207 main.go:141] libmachine: (pause-957460) DBG | domain pause-957460 has defined IP address 192.168.39.90 and MAC address 52:54:00:00:df:e5 in network mk-pause-957460
	I0903 23:29:36.153010  151207 main.go:141] libmachine: (pause-957460) Calling .GetSSHPort
	I0903 23:29:36.153260  151207 main.go:141] libmachine: (pause-957460) Calling .GetSSHKeyPath
	I0903 23:29:36.153486  151207 main.go:141] libmachine: (pause-957460) Calling .GetSSHUsername
	I0903 23:29:36.153734  151207 sshutil.go:53] new ssh client: &{IP:192.168.39.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21341-109162/.minikube/machines/pause-957460/id_rsa Username:docker}
	I0903 23:29:36.250599  151207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0903 23:29:36.281149  151207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I0903 23:29:36.316873  151207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0903 23:29:36.349796  151207 provision.go:87] duration metric: took 655.486761ms to configureAuth
	I0903 23:29:36.349828  151207 buildroot.go:189] setting minikube options for container-runtime
	I0903 23:29:36.350111  151207 config.go:182] Loaded profile config "pause-957460": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0903 23:29:36.350220  151207 main.go:141] libmachine: (pause-957460) Calling .GetSSHHostname
	I0903 23:29:36.354817  151207 main.go:141] libmachine: (pause-957460) DBG | domain pause-957460 has defined MAC address 52:54:00:00:df:e5 in network mk-pause-957460
	I0903 23:29:36.355255  151207 main.go:141] libmachine: (pause-957460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:df:e5", ip: ""} in network mk-pause-957460: {Iface:virbr1 ExpiryTime:2025-09-04 00:28:42 +0000 UTC Type:0 Mac:52:54:00:00:df:e5 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:pause-957460 Clientid:01:52:54:00:00:df:e5}
	I0903 23:29:36.355286  151207 main.go:141] libmachine: (pause-957460) DBG | domain pause-957460 has defined IP address 192.168.39.90 and MAC address 52:54:00:00:df:e5 in network mk-pause-957460
	I0903 23:29:36.355529  151207 main.go:141] libmachine: (pause-957460) Calling .GetSSHPort
	I0903 23:29:36.355726  151207 main.go:141] libmachine: (pause-957460) Calling .GetSSHKeyPath
	I0903 23:29:36.355907  151207 main.go:141] libmachine: (pause-957460) Calling .GetSSHKeyPath
	I0903 23:29:36.356133  151207 main.go:141] libmachine: (pause-957460) Calling .GetSSHUsername
	I0903 23:29:36.356322  151207 main.go:141] libmachine: Using SSH client type: native
	I0903 23:29:36.356592  151207 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83a420] 0x83d120 <nil>  [] 0s} 192.168.39.90 22 <nil> <nil>}
	I0903 23:29:36.356619  151207 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0903 23:29:41.895391  151207 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0903 23:29:41.895425  151207 machine.go:96] duration metric: took 6.57532365s to provisionDockerMachine
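provisionDockerMachine finishes by dropping a one-line sysconfig fragment and restarting CRI-O so the `--insecure-registry 10.96.0.0/12` flag (covering the service CIDR) takes effect. A sketch for confirming the flag reached the daemon, assuming crio.service expands CRIO_MINIKUBE_OPTIONS on its command line as minikube's unit does:

	cat /etc/sysconfig/crio.minikube
	# The running process should carry the flag once crio.service has restarted
	ps -o args= -C crio | tr ' ' '\n' | grep -A1 insecure-registry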
	I0903 23:29:41.895437  151207 start.go:293] postStartSetup for "pause-957460" (driver="kvm2")
	I0903 23:29:41.895449  151207 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0903 23:29:41.895490  151207 main.go:141] libmachine: (pause-957460) Calling .DriverName
	I0903 23:29:41.895847  151207 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0903 23:29:41.895879  151207 main.go:141] libmachine: (pause-957460) Calling .GetSSHHostname
	I0903 23:29:41.898901  151207 main.go:141] libmachine: (pause-957460) DBG | domain pause-957460 has defined MAC address 52:54:00:00:df:e5 in network mk-pause-957460
	I0903 23:29:41.899360  151207 main.go:141] libmachine: (pause-957460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:df:e5", ip: ""} in network mk-pause-957460: {Iface:virbr1 ExpiryTime:2025-09-04 00:28:42 +0000 UTC Type:0 Mac:52:54:00:00:df:e5 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:pause-957460 Clientid:01:52:54:00:00:df:e5}
	I0903 23:29:41.899389  151207 main.go:141] libmachine: (pause-957460) DBG | domain pause-957460 has defined IP address 192.168.39.90 and MAC address 52:54:00:00:df:e5 in network mk-pause-957460
	I0903 23:29:41.899548  151207 main.go:141] libmachine: (pause-957460) Calling .GetSSHPort
	I0903 23:29:41.899744  151207 main.go:141] libmachine: (pause-957460) Calling .GetSSHKeyPath
	I0903 23:29:41.899932  151207 main.go:141] libmachine: (pause-957460) Calling .GetSSHUsername
	I0903 23:29:41.900084  151207 sshutil.go:53] new ssh client: &{IP:192.168.39.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21341-109162/.minikube/machines/pause-957460/id_rsa Username:docker}
	I0903 23:29:41.983185  151207 ssh_runner.go:195] Run: cat /etc/os-release
	I0903 23:29:41.988001  151207 info.go:137] Remote host: Buildroot 2025.02
	I0903 23:29:41.988025  151207 filesync.go:126] Scanning /home/jenkins/minikube-integration/21341-109162/.minikube/addons for local assets ...
	I0903 23:29:41.988098  151207 filesync.go:126] Scanning /home/jenkins/minikube-integration/21341-109162/.minikube/files for local assets ...
	I0903 23:29:41.988190  151207 filesync.go:149] local asset: /home/jenkins/minikube-integration/21341-109162/.minikube/files/etc/ssl/certs/1132882.pem -> 1132882.pem in /etc/ssl/certs
	I0903 23:29:41.988294  151207 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0903 23:29:42.000923  151207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/files/etc/ssl/certs/1132882.pem --> /etc/ssl/certs/1132882.pem (1708 bytes)
	I0903 23:29:42.030779  151207 start.go:296] duration metric: took 135.327248ms for postStartSetup
	I0903 23:29:42.030820  151207 fix.go:56] duration metric: took 6.734411905s for fixHost
	I0903 23:29:42.030840  151207 main.go:141] libmachine: (pause-957460) Calling .GetSSHHostname
	I0903 23:29:42.033700  151207 main.go:141] libmachine: (pause-957460) DBG | domain pause-957460 has defined MAC address 52:54:00:00:df:e5 in network mk-pause-957460
	I0903 23:29:42.034091  151207 main.go:141] libmachine: (pause-957460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:df:e5", ip: ""} in network mk-pause-957460: {Iface:virbr1 ExpiryTime:2025-09-04 00:28:42 +0000 UTC Type:0 Mac:52:54:00:00:df:e5 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:pause-957460 Clientid:01:52:54:00:00:df:e5}
	I0903 23:29:42.034119  151207 main.go:141] libmachine: (pause-957460) DBG | domain pause-957460 has defined IP address 192.168.39.90 and MAC address 52:54:00:00:df:e5 in network mk-pause-957460
	I0903 23:29:42.034309  151207 main.go:141] libmachine: (pause-957460) Calling .GetSSHPort
	I0903 23:29:42.034516  151207 main.go:141] libmachine: (pause-957460) Calling .GetSSHKeyPath
	I0903 23:29:42.034674  151207 main.go:141] libmachine: (pause-957460) Calling .GetSSHKeyPath
	I0903 23:29:42.034876  151207 main.go:141] libmachine: (pause-957460) Calling .GetSSHUsername
	I0903 23:29:42.035060  151207 main.go:141] libmachine: Using SSH client type: native
	I0903 23:29:42.035271  151207 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83a420] 0x83d120 <nil>  [] 0s} 192.168.39.90 22 <nil> <nil>}
	I0903 23:29:42.035285  151207 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0903 23:29:42.138745  151207 main.go:141] libmachine: SSH cmd err, output: <nil>: 1756942182.132151632
	
	I0903 23:29:42.138781  151207 fix.go:216] guest clock: 1756942182.132151632
	I0903 23:29:42.138792  151207 fix.go:229] Guest: 2025-09-03 23:29:42.132151632 +0000 UTC Remote: 2025-09-03 23:29:42.030823493 +0000 UTC m=+8.383499424 (delta=101.328139ms)
	I0903 23:29:42.138820  151207 fix.go:200] guest clock delta is within tolerance: 101.328139ms
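fix.go compares `date +%s.%N` as reported by the guest against the host's wall clock; the ~101ms delta here is inside minikube's drift tolerance, so no clock adjustment is forced. A rough equivalent by hand, where the SSH key path and user are illustrative rather than taken from this run:

	guest=$(ssh -i ~/.minikube/machines/pause-957460/id_rsa docker@192.168.39.90 'date +%s.%N')
	host=$(date +%s.%N)
	awk -v h="$host" -v g="$guest" 'BEGIN { printf "delta: %+.3f s\n", h - g }'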
	I0903 23:29:42.138828  151207 start.go:83] releasing machines lock for "pause-957460", held for 6.842450059s
	I0903 23:29:42.138862  151207 main.go:141] libmachine: (pause-957460) Calling .DriverName
	I0903 23:29:42.139187  151207 main.go:141] libmachine: (pause-957460) Calling .GetIP
	I0903 23:29:42.142055  151207 main.go:141] libmachine: (pause-957460) DBG | domain pause-957460 has defined MAC address 52:54:00:00:df:e5 in network mk-pause-957460
	I0903 23:29:42.142383  151207 main.go:141] libmachine: (pause-957460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:df:e5", ip: ""} in network mk-pause-957460: {Iface:virbr1 ExpiryTime:2025-09-04 00:28:42 +0000 UTC Type:0 Mac:52:54:00:00:df:e5 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:pause-957460 Clientid:01:52:54:00:00:df:e5}
	I0903 23:29:42.142413  151207 main.go:141] libmachine: (pause-957460) DBG | domain pause-957460 has defined IP address 192.168.39.90 and MAC address 52:54:00:00:df:e5 in network mk-pause-957460
	I0903 23:29:42.142557  151207 main.go:141] libmachine: (pause-957460) Calling .DriverName
	I0903 23:29:42.143061  151207 main.go:141] libmachine: (pause-957460) Calling .DriverName
	I0903 23:29:42.143240  151207 main.go:141] libmachine: (pause-957460) Calling .DriverName
	I0903 23:29:42.143335  151207 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0903 23:29:42.143393  151207 main.go:141] libmachine: (pause-957460) Calling .GetSSHHostname
	I0903 23:29:42.143425  151207 ssh_runner.go:195] Run: cat /version.json
	I0903 23:29:42.143446  151207 main.go:141] libmachine: (pause-957460) Calling .GetSSHHostname
	I0903 23:29:42.146189  151207 main.go:141] libmachine: (pause-957460) DBG | domain pause-957460 has defined MAC address 52:54:00:00:df:e5 in network mk-pause-957460
	I0903 23:29:42.146538  151207 main.go:141] libmachine: (pause-957460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:df:e5", ip: ""} in network mk-pause-957460: {Iface:virbr1 ExpiryTime:2025-09-04 00:28:42 +0000 UTC Type:0 Mac:52:54:00:00:df:e5 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:pause-957460 Clientid:01:52:54:00:00:df:e5}
	I0903 23:29:42.146560  151207 main.go:141] libmachine: (pause-957460) DBG | domain pause-957460 has defined MAC address 52:54:00:00:df:e5 in network mk-pause-957460
	I0903 23:29:42.146588  151207 main.go:141] libmachine: (pause-957460) DBG | domain pause-957460 has defined IP address 192.168.39.90 and MAC address 52:54:00:00:df:e5 in network mk-pause-957460
	I0903 23:29:42.146748  151207 main.go:141] libmachine: (pause-957460) Calling .GetSSHPort
	I0903 23:29:42.146918  151207 main.go:141] libmachine: (pause-957460) Calling .GetSSHKeyPath
	I0903 23:29:42.147051  151207 main.go:141] libmachine: (pause-957460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:df:e5", ip: ""} in network mk-pause-957460: {Iface:virbr1 ExpiryTime:2025-09-04 00:28:42 +0000 UTC Type:0 Mac:52:54:00:00:df:e5 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:pause-957460 Clientid:01:52:54:00:00:df:e5}
	I0903 23:29:42.147064  151207 main.go:141] libmachine: (pause-957460) Calling .GetSSHUsername
	I0903 23:29:42.147076  151207 main.go:141] libmachine: (pause-957460) DBG | domain pause-957460 has defined IP address 192.168.39.90 and MAC address 52:54:00:00:df:e5 in network mk-pause-957460
	I0903 23:29:42.147227  151207 main.go:141] libmachine: (pause-957460) Calling .GetSSHPort
	I0903 23:29:42.147233  151207 sshutil.go:53] new ssh client: &{IP:192.168.39.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21341-109162/.minikube/machines/pause-957460/id_rsa Username:docker}
	I0903 23:29:42.147373  151207 main.go:141] libmachine: (pause-957460) Calling .GetSSHKeyPath
	I0903 23:29:42.147517  151207 main.go:141] libmachine: (pause-957460) Calling .GetSSHUsername
	I0903 23:29:42.147656  151207 sshutil.go:53] new ssh client: &{IP:192.168.39.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21341-109162/.minikube/machines/pause-957460/id_rsa Username:docker}
	I0903 23:29:42.228035  151207 ssh_runner.go:195] Run: systemctl --version
	I0903 23:29:42.260654  151207 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0903 23:29:42.412845  151207 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0903 23:29:42.422007  151207 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0903 23:29:42.422075  151207 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0903 23:29:42.433083  151207 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0903 23:29:42.433112  151207 start.go:495] detecting cgroup driver to use...
	I0903 23:29:42.433177  151207 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0903 23:29:42.458445  151207 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0903 23:29:42.480967  151207 docker.go:218] disabling cri-docker service (if available) ...
	I0903 23:29:42.481031  151207 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0903 23:29:42.498735  151207 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0903 23:29:42.518587  151207 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0903 23:29:42.721538  151207 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0903 23:29:42.930794  151207 docker.go:234] disabling docker service ...
	I0903 23:29:42.930878  151207 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0903 23:29:42.968840  151207 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0903 23:29:42.984510  151207 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0903 23:29:43.163361  151207 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0903 23:29:43.333615  151207 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0903 23:29:43.355212  151207 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0903 23:29:43.385327  151207 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0903 23:29:43.385414  151207 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0903 23:29:43.400779  151207 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0903 23:29:43.400847  151207 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0903 23:29:43.413990  151207 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0903 23:29:43.429581  151207 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0903 23:29:43.444858  151207 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0903 23:29:43.457225  151207 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0903 23:29:43.472700  151207 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0903 23:29:43.486584  151207 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0903 23:29:43.498285  151207 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0903 23:29:43.508096  151207 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0903 23:29:43.520704  151207 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0903 23:29:43.681356  151207 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0903 23:29:45.289300  151207 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1.607899314s)
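The run of sed edits above never replaces the file; it patches /etc/crio/crio.conf.d/02-crio.conf in place. Reconstructed from those commands, the keys that end up in the drop-in are:

	pause_image = "registry.k8s.io/pause:3.10.1"
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]

The daemon-reload and crio restart that follow are what make CRI-O pick the drop-in up; here the restart took about 1.6s.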
	I0903 23:29:45.289340  151207 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0903 23:29:45.289423  151207 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0903 23:29:45.295877  151207 start.go:563] Will wait 60s for crictl version
	I0903 23:29:45.295941  151207 ssh_runner.go:195] Run: which crictl
	I0903 23:29:45.300396  151207 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0903 23:29:45.338415  151207 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
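crictl works here without a --runtime-endpoint flag because /etc/crictl.yaml was written a few lines earlier; the pairing, with a couple of read-only checks against the same socket (a sketch), looks like:

	# /etc/crictl.yaml, as written above
	runtime-endpoint: unix:///var/run/crio/crio.sock

	sudo crictl version   # RuntimeName/RuntimeVersion, as printed above
	sudo crictl info      # runtime status and conditions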
	I0903 23:29:45.338517  151207 ssh_runner.go:195] Run: crio --version
	I0903 23:29:45.376964  151207 ssh_runner.go:195] Run: crio --version
	I0903 23:29:45.416935  151207 out.go:179] * Preparing Kubernetes v1.34.0 on CRI-O 1.29.1 ...
	I0903 23:29:45.417919  151207 main.go:141] libmachine: (pause-957460) Calling .GetIP
	I0903 23:29:45.421328  151207 main.go:141] libmachine: (pause-957460) DBG | domain pause-957460 has defined MAC address 52:54:00:00:df:e5 in network mk-pause-957460
	I0903 23:29:45.421808  151207 main.go:141] libmachine: (pause-957460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:df:e5", ip: ""} in network mk-pause-957460: {Iface:virbr1 ExpiryTime:2025-09-04 00:28:42 +0000 UTC Type:0 Mac:52:54:00:00:df:e5 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:pause-957460 Clientid:01:52:54:00:00:df:e5}
	I0903 23:29:45.421837  151207 main.go:141] libmachine: (pause-957460) DBG | domain pause-957460 has defined IP address 192.168.39.90 and MAC address 52:54:00:00:df:e5 in network mk-pause-957460
	I0903 23:29:45.422075  151207 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0903 23:29:45.426580  151207 kubeadm.go:875] updating cluster {Name:pause-957460 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21147/minikube-v1.36.0-1753487480-21147-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:pause-957460 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.90 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0903 23:29:45.426697  151207 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0903 23:29:45.426736  151207 ssh_runner.go:195] Run: sudo crictl images --output json
	I0903 23:29:45.473814  151207 crio.go:514] all images are preloaded for cri-o runtime.
	I0903 23:29:45.473844  151207 crio.go:433] Images already preloaded, skipping extraction
	I0903 23:29:45.473895  151207 ssh_runner.go:195] Run: sudo crictl images --output json
	I0903 23:29:45.520433  151207 crio.go:514] all images are preloaded for cri-o runtime.
	I0903 23:29:45.520461  151207 cache_images.go:85] Images are preloaded, skipping loading
	I0903 23:29:45.520472  151207 kubeadm.go:926] updating node { 192.168.39.90 8443 v1.34.0 crio true true} ...
	I0903 23:29:45.520584  151207 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-957460 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.90
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:pause-957460 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
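The empty ExecStart= in the unit text above is the standard systemd drop-in idiom: it clears the ExecStart list inherited from kubelet.service so the second ExecStart replaces the command rather than appending one. The snippet lands in /etc/systemd/system/kubelet.service.d/10-kubeadm.conf via the scp below, and the merge can be confirmed with:

	sudo systemctl daemon-reload
	systemctl cat kubelet   # prints kubelet.service followed by the 10-kubeadm.conf drop-in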
	I0903 23:29:45.520661  151207 ssh_runner.go:195] Run: crio config
	I0903 23:29:45.572672  151207 cni.go:84] Creating CNI manager for ""
	I0903 23:29:45.572700  151207 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0903 23:29:45.572716  151207 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0903 23:29:45.572747  151207 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.90 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-957460 NodeName:pause-957460 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.90"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.90 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0903 23:29:45.572933  151207 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.90
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-957460"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.90"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.90"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0903 23:29:45.573025  151207 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0903 23:29:45.587509  151207 binaries.go:44] Found k8s binaries, skipping transfer
	I0903 23:29:45.587583  151207 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0903 23:29:45.600483  151207 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I0903 23:29:45.623892  151207 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0903 23:29:45.644207  151207 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2212 bytes)
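The three config documents generated above (InitConfiguration plus ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration) are what just landed in /var/tmp/minikube/kubeadm.yaml.new. To sanity-check a config like this by hand, recent kubeadm releases include a validator subcommand; something along these lines should work (not run in this log):

	sudo /var/lib/minikube/binaries/v1.34.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new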
	I0903 23:29:45.664580  151207 ssh_runner.go:195] Run: grep 192.168.39.90	control-plane.minikube.internal$ /etc/hosts
	I0903 23:29:45.668791  151207 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0903 23:29:45.845844  151207 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0903 23:29:45.870895  151207 certs.go:68] Setting up /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/pause-957460 for IP: 192.168.39.90
	I0903 23:29:45.870920  151207 certs.go:194] generating shared ca certs ...
	I0903 23:29:45.870936  151207 certs.go:226] acquiring lock for ca certs: {Name:mk06ceac58fd506484973632a7bd0b701183c2c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0903 23:29:45.871121  151207 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21341-109162/.minikube/ca.key
	I0903 23:29:45.871183  151207 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21341-109162/.minikube/proxy-client-ca.key
	I0903 23:29:45.871197  151207 certs.go:256] generating profile certs ...
	I0903 23:29:45.871284  151207 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/pause-957460/client.key
	I0903 23:29:45.871344  151207 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/pause-957460/apiserver.key.13718f5a
	I0903 23:29:45.871381  151207 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/pause-957460/proxy-client.key
	I0903 23:29:45.871484  151207 certs.go:484] found cert: /home/jenkins/minikube-integration/21341-109162/.minikube/certs/113288.pem (1338 bytes)
	W0903 23:29:45.871510  151207 certs.go:480] ignoring /home/jenkins/minikube-integration/21341-109162/.minikube/certs/113288_empty.pem, impossibly tiny 0 bytes
	I0903 23:29:45.871520  151207 certs.go:484] found cert: /home/jenkins/minikube-integration/21341-109162/.minikube/certs/ca-key.pem (1675 bytes)
	I0903 23:29:45.871541  151207 certs.go:484] found cert: /home/jenkins/minikube-integration/21341-109162/.minikube/certs/ca.pem (1078 bytes)
	I0903 23:29:45.871565  151207 certs.go:484] found cert: /home/jenkins/minikube-integration/21341-109162/.minikube/certs/cert.pem (1123 bytes)
	I0903 23:29:45.871602  151207 certs.go:484] found cert: /home/jenkins/minikube-integration/21341-109162/.minikube/certs/key.pem (1675 bytes)
	I0903 23:29:45.871661  151207 certs.go:484] found cert: /home/jenkins/minikube-integration/21341-109162/.minikube/files/etc/ssl/certs/1132882.pem (1708 bytes)
	I0903 23:29:45.872248  151207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0903 23:29:45.905809  151207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0903 23:29:46.013449  151207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0903 23:29:46.065527  151207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0903 23:29:46.126258  151207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/pause-957460/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0903 23:29:46.218401  151207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/pause-957460/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0903 23:29:46.317576  151207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/pause-957460/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0903 23:29:46.405885  151207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/pause-957460/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0903 23:29:46.468227  151207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/certs/113288.pem --> /usr/share/ca-certificates/113288.pem (1338 bytes)
	I0903 23:29:46.540356  151207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/files/etc/ssl/certs/1132882.pem --> /usr/share/ca-certificates/1132882.pem (1708 bytes)
	I0903 23:29:46.628688  151207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0903 23:29:46.727214  151207 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0903 23:29:46.777080  151207 ssh_runner.go:195] Run: openssl version
	I0903 23:29:46.796340  151207 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/113288.pem && ln -fs /usr/share/ca-certificates/113288.pem /etc/ssl/certs/113288.pem"
	I0903 23:29:46.821701  151207 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/113288.pem
	I0903 23:29:46.833705  151207 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep  3 22:36 /usr/share/ca-certificates/113288.pem
	I0903 23:29:46.833779  151207 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/113288.pem
	I0903 23:29:46.845539  151207 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/113288.pem /etc/ssl/certs/51391683.0"
	I0903 23:29:46.868263  151207 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1132882.pem && ln -fs /usr/share/ca-certificates/1132882.pem /etc/ssl/certs/1132882.pem"
	I0903 23:29:46.890836  151207 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1132882.pem
	I0903 23:29:46.902613  151207 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep  3 22:36 /usr/share/ca-certificates/1132882.pem
	I0903 23:29:46.902691  151207 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1132882.pem
	I0903 23:29:46.915399  151207 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1132882.pem /etc/ssl/certs/3ec20f2e.0"
	I0903 23:29:46.936675  151207 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0903 23:29:47.040563  151207 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0903 23:29:47.056605  151207 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep  3 22:28 /usr/share/ca-certificates/minikubeCA.pem
	I0903 23:29:47.056691  151207 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0903 23:29:47.072040  151207 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
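The symlink names above (51391683.0, 3ec20f2e.0, b5213941.0) follow OpenSSL's hashed-directory convention: a CA in a -CApath directory is found via a file named <subject-hash>.0, and the hash is exactly what the preceding openssl x509 -hash -noout runs print. Done by hand, the minikubeCA step amounts to:

	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # b5213941 in this run
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"
	# illustrative check: apiserver.crt chains to minikubeCA, so this should verify
	sudo openssl verify -CApath /etc/ssl/certs /var/lib/minikube/certs/apiserver.crt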
	I0903 23:29:47.101448  151207 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0903 23:29:47.111388  151207 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0903 23:29:47.125496  151207 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0903 23:29:47.142359  151207 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0903 23:29:47.157193  151207 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0903 23:29:47.169376  151207 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0903 23:29:47.180485  151207 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
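Each -checkend 86400 run above asks one question: does this certificate expire within the next 86400 seconds (24 hours)? openssl exits 0 if the cert is still valid at that point and 1 if not, which is presumably what these pre-kubeadm checks feed. A sketch:

	if ! sudo openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400; then
	  echo "expires within 24h; cert would need regenerating"
	fi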
	I0903 23:29:47.191874  151207 kubeadm.go:392] StartCluster: {Name:pause-957460 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21147/minikube-v1.36.0-1753487480-21147-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:pause-957460 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.90 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0903 23:29:47.192024  151207 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0903 23:29:47.192086  151207 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0903 23:29:47.262205  151207 cri.go:89] found id: "10e8e7d7bd3e7aac3752bd071d990274ecb8edc847cf6261efa2c66baf0d994b"
	I0903 23:29:47.262238  151207 cri.go:89] found id: "1b24e6d40da9c16e6c760fabd44817047863d2e5bac5ea60d85bb264100b7c73"
	I0903 23:29:47.262244  151207 cri.go:89] found id: "aa9d14c5b4e3ab0d9200e6db3849d49689c17a66f1418ddc40f9c7abca252cdf"
	I0903 23:29:47.262248  151207 cri.go:89] found id: "2337c9c17a585736200e843c09f9dc0d4ed47cc2d8a8aa8a77f42e9548c11e5e"
	I0903 23:29:47.262252  151207 cri.go:89] found id: "63a5274b0c5bfadea8983b60493d6610cc81c20b75987c71017aafd565687523"
	I0903 23:29:47.262256  151207 cri.go:89] found id: "622bd13cd8cac3d51aa7b0cafd1834f8de52c46ccd69532fe8bf3a6eb4a2e49d"
	I0903 23:29:47.262261  151207 cri.go:89] found id: "bf81fb211da095af4350a78f944ae302c860603d85647c92df059e7bab1bf58b"
	I0903 23:29:47.262266  151207 cri.go:89] found id: "235fbdc3e7ec406c66669bdd536b8030197b6f88152ff1ad09a72dcac8975024"
	I0903 23:29:47.262270  151207 cri.go:89] found id: "188275cc44fc3fba51ff3713eaf778fb8a952b28dbcea50ec838f84764dfebca"
	I0903 23:29:47.262278  151207 cri.go:89] found id: "4897d1fe35dcfa698a0f2777d418a4d07ee29d0345b9b1a7efaea54df6234af0"
	I0903 23:29:47.262282  151207 cri.go:89] found id: ""
	I0903 23:29:47.262343  151207 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-957460 -n pause-957460
helpers_test.go:252: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p pause-957460 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p pause-957460 logs -n 25: (2.223528022s)
helpers_test.go:260: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                            ARGS                                                                             │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-380966 sudo containerd config dump                                                                                                                │ cilium-380966             │ jenkins │ v1.36.0 │ 03 Sep 25 23:25 UTC │                     │
	│ ssh     │ -p cilium-380966 sudo systemctl status crio --all --full --no-pager                                                                                         │ cilium-380966             │ jenkins │ v1.36.0 │ 03 Sep 25 23:25 UTC │                     │
	│ ssh     │ -p cilium-380966 sudo systemctl cat crio --no-pager                                                                                                         │ cilium-380966             │ jenkins │ v1.36.0 │ 03 Sep 25 23:25 UTC │                     │
	│ ssh     │ -p cilium-380966 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                               │ cilium-380966             │ jenkins │ v1.36.0 │ 03 Sep 25 23:25 UTC │                     │
	│ ssh     │ -p cilium-380966 sudo crio config                                                                                                                           │ cilium-380966             │ jenkins │ v1.36.0 │ 03 Sep 25 23:25 UTC │                     │
	│ delete  │ -p cilium-380966                                                                                                                                            │ cilium-380966             │ jenkins │ v1.36.0 │ 03 Sep 25 23:25 UTC │ 03 Sep 25 23:25 UTC │
	│ start   │ -p running-upgrade-210842 --memory=3072 --vm-driver=kvm2  --container-runtime=crio                                                                          │ minikube                  │ jenkins │ v1.26.0 │ 03 Sep 25 23:25 UTC │ 03 Sep 25 23:27 UTC │
	│ delete  │ -p offline-crio-911470                                                                                                                                      │ offline-crio-911470       │ jenkins │ v1.36.0 │ 03 Sep 25 23:26 UTC │ 03 Sep 25 23:26 UTC │
	│ start   │ -p force-systemd-flag-037213 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio                                   │ force-systemd-flag-037213 │ jenkins │ v1.36.0 │ 03 Sep 25 23:26 UTC │ 03 Sep 25 23:27 UTC │
	│ stop    │ stopped-upgrade-924805 stop                                                                                                                                 │ minikube                  │ jenkins │ v1.26.0 │ 03 Sep 25 23:27 UTC │ 03 Sep 25 23:28 UTC │
	│ start   │ -p running-upgrade-210842 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio                                                      │ running-upgrade-210842    │ jenkins │ v1.36.0 │ 03 Sep 25 23:27 UTC │ 03 Sep 25 23:28 UTC │
	│ ssh     │ force-systemd-flag-037213 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                        │ force-systemd-flag-037213 │ jenkins │ v1.36.0 │ 03 Sep 25 23:27 UTC │ 03 Sep 25 23:27 UTC │
	│ delete  │ -p force-systemd-flag-037213                                                                                                                                │ force-systemd-flag-037213 │ jenkins │ v1.36.0 │ 03 Sep 25 23:27 UTC │ 03 Sep 25 23:27 UTC │
	│ start   │ -p force-systemd-env-753758 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio                                                    │ force-systemd-env-753758  │ jenkins │ v1.36.0 │ 03 Sep 25 23:27 UTC │ 03 Sep 25 23:28 UTC │
	│ mount   │ /home/jenkins:/minikube-host --profile running-upgrade-210842 --v 0 --9p-version 9p2000.L --gid docker --ip  --msize 262144 --port 0 --type 9p --uid docker │ running-upgrade-210842    │ jenkins │ v1.36.0 │ 03 Sep 25 23:28 UTC │                     │
	│ delete  │ -p running-upgrade-210842                                                                                                                                   │ running-upgrade-210842    │ jenkins │ v1.36.0 │ 03 Sep 25 23:28 UTC │ 03 Sep 25 23:28 UTC │
	│ start   │ -p pause-957460 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio                                                     │ pause-957460              │ jenkins │ v1.36.0 │ 03 Sep 25 23:28 UTC │ 03 Sep 25 23:29 UTC │
	│ delete  │ -p force-systemd-env-753758                                                                                                                                 │ force-systemd-env-753758  │ jenkins │ v1.36.0 │ 03 Sep 25 23:28 UTC │ 03 Sep 25 23:28 UTC │
	│ start   │ -p stopped-upgrade-924805 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio                                                      │ stopped-upgrade-924805    │ jenkins │ v1.36.0 │ 03 Sep 25 23:28 UTC │ 03 Sep 25 23:29 UTC │
	│ start   │ -p cert-expiration-689039 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=crio                                                        │ cert-expiration-689039    │ jenkins │ v1.36.0 │ 03 Sep 25 23:28 UTC │ 03 Sep 25 23:29 UTC │
	│ start   │ -p pause-957460 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio                                                                              │ pause-957460              │ jenkins │ v1.36.0 │ 03 Sep 25 23:29 UTC │ 03 Sep 25 23:30 UTC │
	│ mount   │ /home/jenkins:/minikube-host --profile stopped-upgrade-924805 --v 0 --9p-version 9p2000.L --gid docker --ip  --msize 262144 --port 0 --type 9p --uid docker │ stopped-upgrade-924805    │ jenkins │ v1.36.0 │ 03 Sep 25 23:29 UTC │                     │
	│ delete  │ -p stopped-upgrade-924805                                                                                                                                   │ stopped-upgrade-924805    │ jenkins │ v1.36.0 │ 03 Sep 25 23:29 UTC │ 03 Sep 25 23:29 UTC │
	│ start   │ -p NoKubernetes-561956 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio                                                    │ NoKubernetes-561956       │ jenkins │ v1.36.0 │ 03 Sep 25 23:29 UTC │                     │
	│ start   │ -p NoKubernetes-561956 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio                                                         │ NoKubernetes-561956       │ jenkins │ v1.36.0 │ 03 Sep 25 23:29 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/03 23:29:37
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0903 23:29:37.617565  151427 out.go:360] Setting OutFile to fd 1 ...
	I0903 23:29:37.617837  151427 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0903 23:29:37.617847  151427 out.go:374] Setting ErrFile to fd 2...
	I0903 23:29:37.617851  151427 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0903 23:29:37.618022  151427 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21341-109162/.minikube/bin
	I0903 23:29:37.618738  151427 out.go:368] Setting JSON to false
	I0903 23:29:37.620242  151427 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":7922,"bootTime":1756934256,"procs":217,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0903 23:29:37.620328  151427 start.go:140] virtualization: kvm guest
	I0903 23:29:37.621870  151427 out.go:179] * [NoKubernetes-561956] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	I0903 23:29:37.623089  151427 out.go:179]   - MINIKUBE_LOCATION=21341
	I0903 23:29:37.623074  151427 notify.go:220] Checking for updates...
	I0903 23:29:37.625055  151427 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0903 23:29:37.626094  151427 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21341-109162/kubeconfig
	I0903 23:29:37.627105  151427 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21341-109162/.minikube
	I0903 23:29:37.628230  151427 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0903 23:29:37.629149  151427 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0903 23:29:37.630648  151427 config.go:182] Loaded profile config "cert-expiration-689039": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0903 23:29:37.630810  151427 config.go:182] Loaded profile config "kubernetes-upgrade-938492": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0903 23:29:37.631008  151427 config.go:182] Loaded profile config "pause-957460": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0903 23:29:37.631148  151427 driver.go:421] Setting default libvirt URI to qemu:///system
	I0903 23:29:37.677831  151427 out.go:179] * Using the kvm2 driver based on user configuration
	I0903 23:29:37.678837  151427 start.go:304] selected driver: kvm2
	I0903 23:29:37.678857  151427 start.go:918] validating driver "kvm2" against <nil>
	I0903 23:29:37.678872  151427 start.go:929] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0903 23:29:37.679951  151427 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0903 23:29:37.680041  151427 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21341-109162/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0903 23:29:37.699686  151427 install.go:137] /home/jenkins/minikube-integration/21341-109162/.minikube/bin/docker-machine-driver-kvm2 version is 1.36.0
	I0903 23:29:37.699755  151427 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0903 23:29:37.700140  151427 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I0903 23:29:37.700182  151427 cni.go:84] Creating CNI manager for ""
	I0903 23:29:37.700250  151427 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0903 23:29:37.700265  151427 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0903 23:29:37.700365  151427 start.go:348] cluster config:
	{Name:NoKubernetes-561956 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:NoKubernetes-561956 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0903 23:29:37.700556  151427 iso.go:125] acquiring lock: {Name:mk1032fbced3c9e76ba5a04480289e9f07d0eb9c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0903 23:29:37.702896  151427 out.go:179] * Starting "NoKubernetes-561956" primary control-plane node in "NoKubernetes-561956" cluster
	I0903 23:29:37.040830  150717 main.go:141] libmachine: (cert-expiration-689039) Calling .GetIP
	I0903 23:29:37.225934  150717 main.go:141] libmachine: (cert-expiration-689039) DBG | domain cert-expiration-689039 has defined MAC address 52:54:00:d9:56:92 in network mk-cert-expiration-689039
	I0903 23:29:37.226490  150717 main.go:141] libmachine: (cert-expiration-689039) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:56:92", ip: ""} in network mk-cert-expiration-689039: {Iface:virbr4 ExpiryTime:2025-09-04 00:29:26 +0000 UTC Type:0 Mac:52:54:00:d9:56:92 Iaid: IPaddr:192.168.72.209 Prefix:24 Hostname:cert-expiration-689039 Clientid:01:52:54:00:d9:56:92}
	I0903 23:29:37.226535  150717 main.go:141] libmachine: (cert-expiration-689039) DBG | domain cert-expiration-689039 has defined IP address 192.168.72.209 and MAC address 52:54:00:d9:56:92 in network mk-cert-expiration-689039
	I0903 23:29:37.226717  150717 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0903 23:29:37.231114  150717 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
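The bash one-liner above is an idempotent /etc/hosts update: strip any existing host.minikube.internal line, append the current gateway mapping, and copy the result back over the file. Expanded for readability (a reconstruction of the same command):

	{ grep -v $'\thost.minikube.internal$' /etc/hosts
	  printf '192.168.72.1\thost.minikube.internal\n'
	} > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts   # cp rather than mv keeps the file's inode, which matters if /etc/hosts is a bind mount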
	I0903 23:29:37.245104  150717 kubeadm.go:875] updating cluster {Name:cert-expiration-689039 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21147/minikube-v1.36.0-1753487480-21147-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:cert-expiration-689039 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.209 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0903 23:29:37.245194  150717 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0903 23:29:37.245232  150717 ssh_runner.go:195] Run: sudo crictl images --output json
	I0903 23:29:37.279041  150717 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.0". assuming images are not preloaded.
	I0903 23:29:37.279095  150717 ssh_runner.go:195] Run: which lz4
	I0903 23:29:37.283067  150717 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0903 23:29:37.287290  150717 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0903 23:29:37.287309  150717 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (409455026 bytes)
	I0903 23:29:35.320068  151207 out.go:252] * Updating the running kvm2 "pause-957460" VM ...
	I0903 23:29:35.320093  151207 machine.go:93] provisionDockerMachine start ...
	I0903 23:29:35.320104  151207 main.go:141] libmachine: (pause-957460) Calling .DriverName
	I0903 23:29:35.320298  151207 main.go:141] libmachine: (pause-957460) Calling .GetSSHHostname
	I0903 23:29:35.322936  151207 main.go:141] libmachine: (pause-957460) DBG | domain pause-957460 has defined MAC address 52:54:00:00:df:e5 in network mk-pause-957460
	I0903 23:29:35.323335  151207 main.go:141] libmachine: (pause-957460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:df:e5", ip: ""} in network mk-pause-957460: {Iface:virbr1 ExpiryTime:2025-09-04 00:28:42 +0000 UTC Type:0 Mac:52:54:00:00:df:e5 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:pause-957460 Clientid:01:52:54:00:00:df:e5}
	I0903 23:29:35.323360  151207 main.go:141] libmachine: (pause-957460) DBG | domain pause-957460 has defined IP address 192.168.39.90 and MAC address 52:54:00:00:df:e5 in network mk-pause-957460
	I0903 23:29:35.323507  151207 main.go:141] libmachine: (pause-957460) Calling .GetSSHPort
	I0903 23:29:35.323672  151207 main.go:141] libmachine: (pause-957460) Calling .GetSSHKeyPath
	I0903 23:29:35.323905  151207 main.go:141] libmachine: (pause-957460) Calling .GetSSHKeyPath
	I0903 23:29:35.324050  151207 main.go:141] libmachine: (pause-957460) Calling .GetSSHUsername
	I0903 23:29:35.324227  151207 main.go:141] libmachine: Using SSH client type: native
	I0903 23:29:35.324516  151207 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83a420] 0x83d120 <nil>  [] 0s} 192.168.39.90 22 <nil> <nil>}
	I0903 23:29:35.324531  151207 main.go:141] libmachine: About to run SSH command:
	hostname
	I0903 23:29:35.438588  151207 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-957460
	
	I0903 23:29:35.438635  151207 main.go:141] libmachine: (pause-957460) Calling .GetMachineName
	I0903 23:29:35.438889  151207 buildroot.go:166] provisioning hostname "pause-957460"
	I0903 23:29:35.438917  151207 main.go:141] libmachine: (pause-957460) Calling .GetMachineName
	I0903 23:29:35.439115  151207 main.go:141] libmachine: (pause-957460) Calling .GetSSHHostname
	I0903 23:29:35.442456  151207 main.go:141] libmachine: (pause-957460) DBG | domain pause-957460 has defined MAC address 52:54:00:00:df:e5 in network mk-pause-957460
	I0903 23:29:35.442962  151207 main.go:141] libmachine: (pause-957460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:df:e5", ip: ""} in network mk-pause-957460: {Iface:virbr1 ExpiryTime:2025-09-04 00:28:42 +0000 UTC Type:0 Mac:52:54:00:00:df:e5 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:pause-957460 Clientid:01:52:54:00:00:df:e5}
	I0903 23:29:35.442995  151207 main.go:141] libmachine: (pause-957460) DBG | domain pause-957460 has defined IP address 192.168.39.90 and MAC address 52:54:00:00:df:e5 in network mk-pause-957460
	I0903 23:29:35.443174  151207 main.go:141] libmachine: (pause-957460) Calling .GetSSHPort
	I0903 23:29:35.443378  151207 main.go:141] libmachine: (pause-957460) Calling .GetSSHKeyPath
	I0903 23:29:35.443535  151207 main.go:141] libmachine: (pause-957460) Calling .GetSSHKeyPath
	I0903 23:29:35.443677  151207 main.go:141] libmachine: (pause-957460) Calling .GetSSHUsername
	I0903 23:29:35.443850  151207 main.go:141] libmachine: Using SSH client type: native
	I0903 23:29:35.444144  151207 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83a420] 0x83d120 <nil>  [] 0s} 192.168.39.90 22 <nil> <nil>}
	I0903 23:29:35.444166  151207 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-957460 && echo "pause-957460" | sudo tee /etc/hostname
	I0903 23:29:35.573886  151207 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-957460
	
	I0903 23:29:35.573920  151207 main.go:141] libmachine: (pause-957460) Calling .GetSSHHostname
	I0903 23:29:35.576696  151207 main.go:141] libmachine: (pause-957460) DBG | domain pause-957460 has defined MAC address 52:54:00:00:df:e5 in network mk-pause-957460
	I0903 23:29:35.577038  151207 main.go:141] libmachine: (pause-957460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:df:e5", ip: ""} in network mk-pause-957460: {Iface:virbr1 ExpiryTime:2025-09-04 00:28:42 +0000 UTC Type:0 Mac:52:54:00:00:df:e5 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:pause-957460 Clientid:01:52:54:00:00:df:e5}
	I0903 23:29:35.577066  151207 main.go:141] libmachine: (pause-957460) DBG | domain pause-957460 has defined IP address 192.168.39.90 and MAC address 52:54:00:00:df:e5 in network mk-pause-957460
	I0903 23:29:35.577228  151207 main.go:141] libmachine: (pause-957460) Calling .GetSSHPort
	I0903 23:29:35.577436  151207 main.go:141] libmachine: (pause-957460) Calling .GetSSHKeyPath
	I0903 23:29:35.577619  151207 main.go:141] libmachine: (pause-957460) Calling .GetSSHKeyPath
	I0903 23:29:35.577790  151207 main.go:141] libmachine: (pause-957460) Calling .GetSSHUsername
	I0903 23:29:35.577973  151207 main.go:141] libmachine: Using SSH client type: native
	I0903 23:29:35.578213  151207 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83a420] 0x83d120 <nil>  [] 0s} 192.168.39.90 22 <nil> <nil>}
	I0903 23:29:35.578230  151207 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-957460' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-957460/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-957460' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0903 23:29:35.694230  151207 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0903 23:29:35.694259  151207 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21341-109162/.minikube CaCertPath:/home/jenkins/minikube-integration/21341-109162/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21341-109162/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21341-109162/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21341-109162/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21341-109162/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21341-109162/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21341-109162/.minikube}
	I0903 23:29:35.694283  151207 buildroot.go:174] setting up certificates
	I0903 23:29:35.694293  151207 provision.go:84] configureAuth start
	I0903 23:29:35.694306  151207 main.go:141] libmachine: (pause-957460) Calling .GetMachineName
	I0903 23:29:35.694577  151207 main.go:141] libmachine: (pause-957460) Calling .GetIP
	I0903 23:29:35.697672  151207 main.go:141] libmachine: (pause-957460) DBG | domain pause-957460 has defined MAC address 52:54:00:00:df:e5 in network mk-pause-957460
	I0903 23:29:35.698086  151207 main.go:141] libmachine: (pause-957460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:df:e5", ip: ""} in network mk-pause-957460: {Iface:virbr1 ExpiryTime:2025-09-04 00:28:42 +0000 UTC Type:0 Mac:52:54:00:00:df:e5 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:pause-957460 Clientid:01:52:54:00:00:df:e5}
	I0903 23:29:35.698117  151207 main.go:141] libmachine: (pause-957460) DBG | domain pause-957460 has defined IP address 192.168.39.90 and MAC address 52:54:00:00:df:e5 in network mk-pause-957460
	I0903 23:29:35.698311  151207 main.go:141] libmachine: (pause-957460) Calling .GetSSHHostname
	I0903 23:29:35.701203  151207 main.go:141] libmachine: (pause-957460) DBG | domain pause-957460 has defined MAC address 52:54:00:00:df:e5 in network mk-pause-957460
	I0903 23:29:35.701549  151207 main.go:141] libmachine: (pause-957460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:df:e5", ip: ""} in network mk-pause-957460: {Iface:virbr1 ExpiryTime:2025-09-04 00:28:42 +0000 UTC Type:0 Mac:52:54:00:00:df:e5 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:pause-957460 Clientid:01:52:54:00:00:df:e5}
	I0903 23:29:35.701579  151207 main.go:141] libmachine: (pause-957460) DBG | domain pause-957460 has defined IP address 192.168.39.90 and MAC address 52:54:00:00:df:e5 in network mk-pause-957460
	I0903 23:29:35.701721  151207 provision.go:143] copyHostCerts
	I0903 23:29:35.701782  151207 exec_runner.go:144] found /home/jenkins/minikube-integration/21341-109162/.minikube/ca.pem, removing ...
	I0903 23:29:35.701805  151207 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21341-109162/.minikube/ca.pem
	I0903 23:29:35.701858  151207 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21341-109162/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21341-109162/.minikube/ca.pem (1078 bytes)
	I0903 23:29:35.701943  151207 exec_runner.go:144] found /home/jenkins/minikube-integration/21341-109162/.minikube/cert.pem, removing ...
	I0903 23:29:35.701951  151207 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21341-109162/.minikube/cert.pem
	I0903 23:29:35.701970  151207 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21341-109162/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21341-109162/.minikube/cert.pem (1123 bytes)
	I0903 23:29:35.702034  151207 exec_runner.go:144] found /home/jenkins/minikube-integration/21341-109162/.minikube/key.pem, removing ...
	I0903 23:29:35.702041  151207 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21341-109162/.minikube/key.pem
	I0903 23:29:35.702057  151207 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21341-109162/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21341-109162/.minikube/key.pem (1675 bytes)
	I0903 23:29:35.702102  151207 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21341-109162/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21341-109162/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21341-109162/.minikube/certs/ca-key.pem org=jenkins.pause-957460 san=[127.0.0.1 192.168.39.90 localhost minikube pause-957460]
	I0903 23:29:36.149133  151207 provision.go:177] copyRemoteCerts
	I0903 23:29:36.149198  151207 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0903 23:29:36.149231  151207 main.go:141] libmachine: (pause-957460) Calling .GetSSHHostname
	I0903 23:29:36.152291  151207 main.go:141] libmachine: (pause-957460) DBG | domain pause-957460 has defined MAC address 52:54:00:00:df:e5 in network mk-pause-957460
	I0903 23:29:36.152816  151207 main.go:141] libmachine: (pause-957460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:df:e5", ip: ""} in network mk-pause-957460: {Iface:virbr1 ExpiryTime:2025-09-04 00:28:42 +0000 UTC Type:0 Mac:52:54:00:00:df:e5 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:pause-957460 Clientid:01:52:54:00:00:df:e5}
	I0903 23:29:36.152856  151207 main.go:141] libmachine: (pause-957460) DBG | domain pause-957460 has defined IP address 192.168.39.90 and MAC address 52:54:00:00:df:e5 in network mk-pause-957460
	I0903 23:29:36.153010  151207 main.go:141] libmachine: (pause-957460) Calling .GetSSHPort
	I0903 23:29:36.153260  151207 main.go:141] libmachine: (pause-957460) Calling .GetSSHKeyPath
	I0903 23:29:36.153486  151207 main.go:141] libmachine: (pause-957460) Calling .GetSSHUsername
	I0903 23:29:36.153734  151207 sshutil.go:53] new ssh client: &{IP:192.168.39.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21341-109162/.minikube/machines/pause-957460/id_rsa Username:docker}
	I0903 23:29:36.250599  151207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0903 23:29:36.281149  151207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I0903 23:29:36.316873  151207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0903 23:29:36.349796  151207 provision.go:87] duration metric: took 655.486761ms to configureAuth
	I0903 23:29:36.349828  151207 buildroot.go:189] setting minikube options for container-runtime
	I0903 23:29:36.350111  151207 config.go:182] Loaded profile config "pause-957460": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0903 23:29:36.350220  151207 main.go:141] libmachine: (pause-957460) Calling .GetSSHHostname
	I0903 23:29:36.354817  151207 main.go:141] libmachine: (pause-957460) DBG | domain pause-957460 has defined MAC address 52:54:00:00:df:e5 in network mk-pause-957460
	I0903 23:29:36.355255  151207 main.go:141] libmachine: (pause-957460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:df:e5", ip: ""} in network mk-pause-957460: {Iface:virbr1 ExpiryTime:2025-09-04 00:28:42 +0000 UTC Type:0 Mac:52:54:00:00:df:e5 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:pause-957460 Clientid:01:52:54:00:00:df:e5}
	I0903 23:29:36.355286  151207 main.go:141] libmachine: (pause-957460) DBG | domain pause-957460 has defined IP address 192.168.39.90 and MAC address 52:54:00:00:df:e5 in network mk-pause-957460
	I0903 23:29:36.355529  151207 main.go:141] libmachine: (pause-957460) Calling .GetSSHPort
	I0903 23:29:36.355726  151207 main.go:141] libmachine: (pause-957460) Calling .GetSSHKeyPath
	I0903 23:29:36.355907  151207 main.go:141] libmachine: (pause-957460) Calling .GetSSHKeyPath
	I0903 23:29:36.356133  151207 main.go:141] libmachine: (pause-957460) Calling .GetSSHUsername
	I0903 23:29:36.356322  151207 main.go:141] libmachine: Using SSH client type: native
	I0903 23:29:36.356592  151207 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83a420] 0x83d120 <nil>  [] 0s} 192.168.39.90 22 <nil> <nil>}
	I0903 23:29:36.356619  151207 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0903 23:29:38.737414  150717 crio.go:462] duration metric: took 1.454386204s to copy over tarball
	I0903 23:29:38.737486  150717 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0903 23:29:40.217850  150717 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.480338058s)
	I0903 23:29:40.217869  150717 crio.go:469] duration metric: took 1.480430934s to extract the tarball
	I0903 23:29:40.217876  150717 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0903 23:29:40.268917  150717 ssh_runner.go:195] Run: sudo crictl images --output json
	I0903 23:29:40.311990  150717 crio.go:514] all images are preloaded for cri-o runtime.
	I0903 23:29:40.312003  150717 cache_images.go:85] Images are preloaded, skipping loading
	I0903 23:29:40.312009  150717 kubeadm.go:926] updating node { 192.168.72.209 8443 v1.34.0 crio true true} ...
	I0903 23:29:40.312099  150717 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=cert-expiration-689039 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.209
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:cert-expiration-689039 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0903 23:29:40.312160  150717 ssh_runner.go:195] Run: crio config
	I0903 23:29:40.354543  150717 cni.go:84] Creating CNI manager for ""
	I0903 23:29:40.354552  150717 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0903 23:29:40.354563  150717 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0903 23:29:40.354582  150717 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.209 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:cert-expiration-689039 NodeName:cert-expiration-689039 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.209"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.209 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0903 23:29:40.354691  150717 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.209
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "cert-expiration-689039"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.72.209"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.209"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0903 23:29:40.354753  150717 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0903 23:29:40.365952  150717 binaries.go:44] Found k8s binaries, skipping transfer
	I0903 23:29:40.366009  150717 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0903 23:29:40.376758  150717 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
	I0903 23:29:40.395582  150717 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0903 23:29:40.413357  150717 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
	I0903 23:29:40.432029  150717 ssh_runner.go:195] Run: grep 192.168.72.209	control-plane.minikube.internal$ /etc/hosts
	I0903 23:29:40.435770  150717 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.209	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0903 23:29:40.448647  150717 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0903 23:29:40.581151  150717 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0903 23:29:40.616093  150717 certs.go:68] Setting up /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/cert-expiration-689039 for IP: 192.168.72.209
	I0903 23:29:40.616109  150717 certs.go:194] generating shared ca certs ...
	I0903 23:29:40.616132  150717 certs.go:226] acquiring lock for ca certs: {Name:mk06ceac58fd506484973632a7bd0b701183c2c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0903 23:29:40.616366  150717 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21341-109162/.minikube/ca.key
	I0903 23:29:40.616422  150717 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21341-109162/.minikube/proxy-client-ca.key
	I0903 23:29:40.616430  150717 certs.go:256] generating profile certs ...
	I0903 23:29:40.616505  150717 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/cert-expiration-689039/client.key
	I0903 23:29:40.616534  150717 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/cert-expiration-689039/client.crt with IP's: []
	I0903 23:29:40.677306  150717 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/cert-expiration-689039/client.crt ...
	I0903 23:29:40.677323  150717 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/cert-expiration-689039/client.crt: {Name:mkf5ecbd814becf066c6e6bb04332cd6714539dc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0903 23:29:40.677511  150717 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/cert-expiration-689039/client.key ...
	I0903 23:29:40.677520  150717 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/cert-expiration-689039/client.key: {Name:mkc7814d5f13210244d374251f5b47585e9945d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0903 23:29:40.677597  150717 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/cert-expiration-689039/apiserver.key.3a0ec45c
	I0903 23:29:40.677609  150717 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/cert-expiration-689039/apiserver.crt.3a0ec45c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.72.209]
	I0903 23:29:40.866917  150717 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/cert-expiration-689039/apiserver.crt.3a0ec45c ...
	I0903 23:29:40.866933  150717 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/cert-expiration-689039/apiserver.crt.3a0ec45c: {Name:mk0d8bf488d245574c04d2605617a7f7e8132bf5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0903 23:29:40.867095  150717 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/cert-expiration-689039/apiserver.key.3a0ec45c ...
	I0903 23:29:40.867104  150717 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/cert-expiration-689039/apiserver.key.3a0ec45c: {Name:mk6167336a0e6aa19740165e42279e71f0f8fa9f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0903 23:29:40.867175  150717 certs.go:381] copying /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/cert-expiration-689039/apiserver.crt.3a0ec45c -> /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/cert-expiration-689039/apiserver.crt
	I0903 23:29:40.867265  150717 certs.go:385] copying /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/cert-expiration-689039/apiserver.key.3a0ec45c -> /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/cert-expiration-689039/apiserver.key
	I0903 23:29:40.867313  150717 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/cert-expiration-689039/proxy-client.key
	I0903 23:29:40.867324  150717 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/cert-expiration-689039/proxy-client.crt with IP's: []
	I0903 23:29:41.012621  150717 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/cert-expiration-689039/proxy-client.crt ...
	I0903 23:29:41.012643  150717 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/cert-expiration-689039/proxy-client.crt: {Name:mk3d8edf4f96f3056f32313cfcb531f0e5fc62e3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0903 23:29:41.012800  150717 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/cert-expiration-689039/proxy-client.key ...
	I0903 23:29:41.012808  150717 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/cert-expiration-689039/proxy-client.key: {Name:mk2afd64f3163274f9651082a04d952689eac296 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0903 23:29:41.012967  150717 certs.go:484] found cert: /home/jenkins/minikube-integration/21341-109162/.minikube/certs/113288.pem (1338 bytes)
	W0903 23:29:41.012996  150717 certs.go:480] ignoring /home/jenkins/minikube-integration/21341-109162/.minikube/certs/113288_empty.pem, impossibly tiny 0 bytes
	I0903 23:29:41.013002  150717 certs.go:484] found cert: /home/jenkins/minikube-integration/21341-109162/.minikube/certs/ca-key.pem (1675 bytes)
	I0903 23:29:41.013022  150717 certs.go:484] found cert: /home/jenkins/minikube-integration/21341-109162/.minikube/certs/ca.pem (1078 bytes)
	I0903 23:29:41.013040  150717 certs.go:484] found cert: /home/jenkins/minikube-integration/21341-109162/.minikube/certs/cert.pem (1123 bytes)
	I0903 23:29:41.013057  150717 certs.go:484] found cert: /home/jenkins/minikube-integration/21341-109162/.minikube/certs/key.pem (1675 bytes)
	I0903 23:29:41.013089  150717 certs.go:484] found cert: /home/jenkins/minikube-integration/21341-109162/.minikube/files/etc/ssl/certs/1132882.pem (1708 bytes)
	I0903 23:29:41.013641  150717 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0903 23:29:41.044595  150717 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0903 23:29:41.070722  150717 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0903 23:29:41.096419  150717 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0903 23:29:41.122874  150717 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/cert-expiration-689039/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0903 23:29:41.149323  150717 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/cert-expiration-689039/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0903 23:29:41.175278  150717 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/cert-expiration-689039/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0903 23:29:41.201431  150717 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/cert-expiration-689039/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0903 23:29:41.227473  150717 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/files/etc/ssl/certs/1132882.pem --> /usr/share/ca-certificates/1132882.pem (1708 bytes)
	I0903 23:29:41.254001  150717 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0903 23:29:41.286624  150717 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/certs/113288.pem --> /usr/share/ca-certificates/113288.pem (1338 bytes)
	I0903 23:29:41.322404  150717 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0903 23:29:41.343505  150717 ssh_runner.go:195] Run: openssl version
	I0903 23:29:41.349535  150717 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0903 23:29:41.361277  150717 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0903 23:29:41.365732  150717 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep  3 22:28 /usr/share/ca-certificates/minikubeCA.pem
	I0903 23:29:41.365774  150717 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0903 23:29:41.372331  150717 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0903 23:29:41.383491  150717 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/113288.pem && ln -fs /usr/share/ca-certificates/113288.pem /etc/ssl/certs/113288.pem"
	I0903 23:29:41.394964  150717 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/113288.pem
	I0903 23:29:41.399451  150717 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep  3 22:36 /usr/share/ca-certificates/113288.pem
	I0903 23:29:41.399489  150717 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/113288.pem
	I0903 23:29:41.406076  150717 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/113288.pem /etc/ssl/certs/51391683.0"
	I0903 23:29:41.417678  150717 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1132882.pem && ln -fs /usr/share/ca-certificates/1132882.pem /etc/ssl/certs/1132882.pem"
	I0903 23:29:41.429470  150717 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1132882.pem
	I0903 23:29:41.433897  150717 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep  3 22:36 /usr/share/ca-certificates/1132882.pem
	I0903 23:29:41.433944  150717 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1132882.pem
	I0903 23:29:41.440408  150717 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1132882.pem /etc/ssl/certs/3ec20f2e.0"
	I0903 23:29:41.451765  150717 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0903 23:29:41.455960  150717 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0903 23:29:41.456002  150717 kubeadm.go:392] StartCluster: {Name:cert-expiration-689039 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21147/minikube-v1.36.0-1753487480-21147-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:cert-expiration-689039 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.209 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0903 23:29:41.456057  150717 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0903 23:29:41.456115  150717 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0903 23:29:41.498507  150717 cri.go:89] found id: ""
	I0903 23:29:41.498564  150717 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0903 23:29:41.510278  150717 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0903 23:29:41.523326  150717 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0903 23:29:41.536361  150717 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0903 23:29:41.536372  150717 kubeadm.go:157] found existing configuration files:
	
	I0903 23:29:41.536426  150717 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0903 23:29:41.546666  150717 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0903 23:29:41.546713  150717 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0903 23:29:41.558154  150717 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0903 23:29:41.567679  150717 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0903 23:29:41.567734  150717 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0903 23:29:41.578000  150717 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0903 23:29:41.587798  150717 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0903 23:29:41.587839  150717 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0903 23:29:41.598371  150717 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0903 23:29:41.608203  150717 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0903 23:29:41.608249  150717 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0903 23:29:41.618331  150717 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0903 23:29:41.669729  150717 kubeadm.go:310] [init] Using Kubernetes version: v1.34.0
	I0903 23:29:41.669812  150717 kubeadm.go:310] [preflight] Running pre-flight checks
	I0903 23:29:41.779248  150717 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0903 23:29:41.779427  150717 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0903 23:29:41.779567  150717 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0903 23:29:41.793087  150717 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0903 23:29:37.703939  151427 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0903 23:29:37.703998  151427 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21341-109162/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4
	I0903 23:29:37.704018  151427 cache.go:58] Caching tarball of preloaded images
	I0903 23:29:37.704131  151427 preload.go:172] Found /home/jenkins/minikube-integration/21341-109162/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0903 23:29:37.704156  151427 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
	I0903 23:29:37.704309  151427 profile.go:143] Saving config to /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/NoKubernetes-561956/config.json ...
	I0903 23:29:37.704340  151427 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/NoKubernetes-561956/config.json: {Name:mka5d765d95b98338c3890877dd6523d7b0bbc4d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0903 23:29:37.704551  151427 start.go:360] acquireMachinesLock for NoKubernetes-561956: {Name:mkcbe368d68a51a2a3c0eadc653c4df7d9736b4b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0903 23:29:42.138928  151427 start.go:364] duration metric: took 4.434338115s to acquireMachinesLock for "NoKubernetes-561956"
	I0903 23:29:42.139009  151427 start.go:93] Provisioning new machine with config: &{Name:NoKubernetes-561956 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21147/minikube-v1.36.0-1753487480-21147-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:NoKubernetes-561956 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0903 23:29:42.139126  151427 start.go:125] createHost starting for "" (driver="kvm2")
	I0903 23:29:42.281042  151427 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0903 23:29:42.281329  151427 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21341-109162/.minikube/bin/docker-machine-driver-kvm2
	I0903 23:29:42.281409  151427 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0903 23:29:42.300134  151427 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42697
	I0903 23:29:42.300614  151427 main.go:141] libmachine: () Calling .GetVersion
	I0903 23:29:42.301198  151427 main.go:141] libmachine: Using API Version  1
	I0903 23:29:42.301224  151427 main.go:141] libmachine: () Calling .SetConfigRaw
	I0903 23:29:42.301640  151427 main.go:141] libmachine: () Calling .GetMachineName
	I0903 23:29:42.301877  151427 main.go:141] libmachine: (NoKubernetes-561956) Calling .GetMachineName
	I0903 23:29:42.302015  151427 main.go:141] libmachine: (NoKubernetes-561956) Calling .DriverName
	I0903 23:29:42.302145  151427 start.go:159] libmachine.API.Create for "NoKubernetes-561956" (driver="kvm2")
	I0903 23:29:42.302186  151427 client.go:168] LocalClient.Create starting
	I0903 23:29:42.302224  151427 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21341-109162/.minikube/certs/ca.pem
	I0903 23:29:42.302265  151427 main.go:141] libmachine: Decoding PEM data...
	I0903 23:29:42.302288  151427 main.go:141] libmachine: Parsing certificate...
	I0903 23:29:42.302357  151427 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21341-109162/.minikube/certs/cert.pem
	I0903 23:29:42.302385  151427 main.go:141] libmachine: Decoding PEM data...
	I0903 23:29:42.302402  151427 main.go:141] libmachine: Parsing certificate...
	I0903 23:29:42.302425  151427 main.go:141] libmachine: Running pre-create checks...
	I0903 23:29:42.302438  151427 main.go:141] libmachine: (NoKubernetes-561956) Calling .PreCreateCheck
	I0903 23:29:42.302805  151427 main.go:141] libmachine: (NoKubernetes-561956) Calling .GetConfigRaw
	I0903 23:29:42.303329  151427 main.go:141] libmachine: Creating machine...
	I0903 23:29:42.303351  151427 main.go:141] libmachine: (NoKubernetes-561956) Calling .Create
	I0903 23:29:42.303475  151427 main.go:141] libmachine: (NoKubernetes-561956) creating KVM machine...
	I0903 23:29:42.303495  151427 main.go:141] libmachine: (NoKubernetes-561956) creating network...
	I0903 23:29:42.304712  151427 main.go:141] libmachine: (NoKubernetes-561956) DBG | found existing default KVM network
	I0903 23:29:42.305915  151427 main.go:141] libmachine: (NoKubernetes-561956) DBG | I0903 23:29:42.305735  151484 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:53:ac:01} reservation:<nil>}
	I0903 23:29:42.306700  151427 main.go:141] libmachine: (NoKubernetes-561956) DBG | I0903 23:29:42.306618  151484 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:b0:b1:0b} reservation:<nil>}
	I0903 23:29:42.308206  151427 main.go:141] libmachine: (NoKubernetes-561956) DBG | I0903 23:29:42.308113  151484 network.go:206] using free private subnet 192.168.61.0/24: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00027cdf0}
	I0903 23:29:42.308235  151427 main.go:141] libmachine: (NoKubernetes-561956) DBG | created network xml: 
	I0903 23:29:42.308262  151427 main.go:141] libmachine: (NoKubernetes-561956) DBG | <network>
	I0903 23:29:42.308275  151427 main.go:141] libmachine: (NoKubernetes-561956) DBG |   <name>mk-NoKubernetes-561956</name>
	I0903 23:29:42.308287  151427 main.go:141] libmachine: (NoKubernetes-561956) DBG |   <dns enable='no'/>
	I0903 23:29:42.308294  151427 main.go:141] libmachine: (NoKubernetes-561956) DBG |   
	I0903 23:29:42.308307  151427 main.go:141] libmachine: (NoKubernetes-561956) DBG |   <ip address='192.168.61.1' netmask='255.255.255.0'>
	I0903 23:29:42.308326  151427 main.go:141] libmachine: (NoKubernetes-561956) DBG |     <dhcp>
	I0903 23:29:42.308340  151427 main.go:141] libmachine: (NoKubernetes-561956) DBG |       <range start='192.168.61.2' end='192.168.61.253'/>
	I0903 23:29:42.308348  151427 main.go:141] libmachine: (NoKubernetes-561956) DBG |     </dhcp>
	I0903 23:29:42.308353  151427 main.go:141] libmachine: (NoKubernetes-561956) DBG |   </ip>
	I0903 23:29:42.308357  151427 main.go:141] libmachine: (NoKubernetes-561956) DBG |   
	I0903 23:29:42.308364  151427 main.go:141] libmachine: (NoKubernetes-561956) DBG | </network>
	I0903 23:29:42.308368  151427 main.go:141] libmachine: (NoKubernetes-561956) DBG | 
	I0903 23:29:42.442760  151427 main.go:141] libmachine: (NoKubernetes-561956) DBG | trying to create private KVM network mk-NoKubernetes-561956 192.168.61.0/24...
	I0903 23:29:42.533804  151427 main.go:141] libmachine: (NoKubernetes-561956) setting up store path in /home/jenkins/minikube-integration/21341-109162/.minikube/machines/NoKubernetes-561956 ...
	I0903 23:29:42.533833  151427 main.go:141] libmachine: (NoKubernetes-561956) DBG | private KVM network mk-NoKubernetes-561956 192.168.61.0/24 created
	I0903 23:29:42.533846  151427 main.go:141] libmachine: (NoKubernetes-561956) building disk image from file:///home/jenkins/minikube-integration/21341-109162/.minikube/cache/iso/amd64/minikube-v1.36.0-1753487480-21147-amd64.iso
	I0903 23:29:42.533884  151427 main.go:141] libmachine: (NoKubernetes-561956) Downloading /home/jenkins/minikube-integration/21341-109162/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/21341-109162/.minikube/cache/iso/amd64/minikube-v1.36.0-1753487480-21147-amd64.iso...
	I0903 23:29:42.533905  151427 main.go:141] libmachine: (NoKubernetes-561956) DBG | I0903 23:29:42.533331  151484 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/21341-109162/.minikube
	I0903 23:29:41.794645  150717 out.go:252]   - Generating certificates and keys ...
	I0903 23:29:41.794764  150717 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0903 23:29:41.794846  150717 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0903 23:29:42.074737  150717 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0903 23:29:42.755623  150717 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0903 23:29:43.059826  150717 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0903 23:29:43.209501  150717 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0903 23:29:43.430577  150717 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0903 23:29:43.431131  150717 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [cert-expiration-689039 localhost] and IPs [192.168.72.209 127.0.0.1 ::1]
	I0903 23:29:41.895391  151207 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0903 23:29:41.895425  151207 machine.go:96] duration metric: took 6.57532365s to provisionDockerMachine
	I0903 23:29:41.895437  151207 start.go:293] postStartSetup for "pause-957460" (driver="kvm2")
	I0903 23:29:41.895449  151207 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0903 23:29:41.895490  151207 main.go:141] libmachine: (pause-957460) Calling .DriverName
	I0903 23:29:41.895847  151207 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0903 23:29:41.895879  151207 main.go:141] libmachine: (pause-957460) Calling .GetSSHHostname
	I0903 23:29:41.898901  151207 main.go:141] libmachine: (pause-957460) DBG | domain pause-957460 has defined MAC address 52:54:00:00:df:e5 in network mk-pause-957460
	I0903 23:29:41.899360  151207 main.go:141] libmachine: (pause-957460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:df:e5", ip: ""} in network mk-pause-957460: {Iface:virbr1 ExpiryTime:2025-09-04 00:28:42 +0000 UTC Type:0 Mac:52:54:00:00:df:e5 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:pause-957460 Clientid:01:52:54:00:00:df:e5}
	I0903 23:29:41.899389  151207 main.go:141] libmachine: (pause-957460) DBG | domain pause-957460 has defined IP address 192.168.39.90 and MAC address 52:54:00:00:df:e5 in network mk-pause-957460
	I0903 23:29:41.899548  151207 main.go:141] libmachine: (pause-957460) Calling .GetSSHPort
	I0903 23:29:41.899744  151207 main.go:141] libmachine: (pause-957460) Calling .GetSSHKeyPath
	I0903 23:29:41.899932  151207 main.go:141] libmachine: (pause-957460) Calling .GetSSHUsername
	I0903 23:29:41.900084  151207 sshutil.go:53] new ssh client: &{IP:192.168.39.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21341-109162/.minikube/machines/pause-957460/id_rsa Username:docker}
	I0903 23:29:41.983185  151207 ssh_runner.go:195] Run: cat /etc/os-release
	I0903 23:29:41.988001  151207 info.go:137] Remote host: Buildroot 2025.02
	I0903 23:29:41.988025  151207 filesync.go:126] Scanning /home/jenkins/minikube-integration/21341-109162/.minikube/addons for local assets ...
	I0903 23:29:41.988098  151207 filesync.go:126] Scanning /home/jenkins/minikube-integration/21341-109162/.minikube/files for local assets ...
	I0903 23:29:41.988190  151207 filesync.go:149] local asset: /home/jenkins/minikube-integration/21341-109162/.minikube/files/etc/ssl/certs/1132882.pem -> 1132882.pem in /etc/ssl/certs
	I0903 23:29:41.988294  151207 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0903 23:29:42.000923  151207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/files/etc/ssl/certs/1132882.pem --> /etc/ssl/certs/1132882.pem (1708 bytes)
	I0903 23:29:42.030779  151207 start.go:296] duration metric: took 135.327248ms for postStartSetup
	I0903 23:29:42.030820  151207 fix.go:56] duration metric: took 6.734411905s for fixHost
	I0903 23:29:42.030840  151207 main.go:141] libmachine: (pause-957460) Calling .GetSSHHostname
	I0903 23:29:42.033700  151207 main.go:141] libmachine: (pause-957460) DBG | domain pause-957460 has defined MAC address 52:54:00:00:df:e5 in network mk-pause-957460
	I0903 23:29:42.034091  151207 main.go:141] libmachine: (pause-957460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:df:e5", ip: ""} in network mk-pause-957460: {Iface:virbr1 ExpiryTime:2025-09-04 00:28:42 +0000 UTC Type:0 Mac:52:54:00:00:df:e5 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:pause-957460 Clientid:01:52:54:00:00:df:e5}
	I0903 23:29:42.034119  151207 main.go:141] libmachine: (pause-957460) DBG | domain pause-957460 has defined IP address 192.168.39.90 and MAC address 52:54:00:00:df:e5 in network mk-pause-957460
	I0903 23:29:42.034309  151207 main.go:141] libmachine: (pause-957460) Calling .GetSSHPort
	I0903 23:29:42.034516  151207 main.go:141] libmachine: (pause-957460) Calling .GetSSHKeyPath
	I0903 23:29:42.034674  151207 main.go:141] libmachine: (pause-957460) Calling .GetSSHKeyPath
	I0903 23:29:42.034876  151207 main.go:141] libmachine: (pause-957460) Calling .GetSSHUsername
	I0903 23:29:42.035060  151207 main.go:141] libmachine: Using SSH client type: native
	I0903 23:29:42.035271  151207 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83a420] 0x83d120 <nil>  [] 0s} 192.168.39.90 22 <nil> <nil>}
	I0903 23:29:42.035285  151207 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0903 23:29:42.138745  151207 main.go:141] libmachine: SSH cmd err, output: <nil>: 1756942182.132151632
	
	I0903 23:29:42.138781  151207 fix.go:216] guest clock: 1756942182.132151632
	I0903 23:29:42.138792  151207 fix.go:229] Guest: 2025-09-03 23:29:42.132151632 +0000 UTC Remote: 2025-09-03 23:29:42.030823493 +0000 UTC m=+8.383499424 (delta=101.328139ms)
	I0903 23:29:42.138820  151207 fix.go:200] guest clock delta is within tolerance: 101.328139ms
	I0903 23:29:42.138828  151207 start.go:83] releasing machines lock for "pause-957460", held for 6.842450059s
	I0903 23:29:42.138862  151207 main.go:141] libmachine: (pause-957460) Calling .DriverName
	I0903 23:29:42.139187  151207 main.go:141] libmachine: (pause-957460) Calling .GetIP
	I0903 23:29:42.142055  151207 main.go:141] libmachine: (pause-957460) DBG | domain pause-957460 has defined MAC address 52:54:00:00:df:e5 in network mk-pause-957460
	I0903 23:29:42.142383  151207 main.go:141] libmachine: (pause-957460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:df:e5", ip: ""} in network mk-pause-957460: {Iface:virbr1 ExpiryTime:2025-09-04 00:28:42 +0000 UTC Type:0 Mac:52:54:00:00:df:e5 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:pause-957460 Clientid:01:52:54:00:00:df:e5}
	I0903 23:29:42.142413  151207 main.go:141] libmachine: (pause-957460) DBG | domain pause-957460 has defined IP address 192.168.39.90 and MAC address 52:54:00:00:df:e5 in network mk-pause-957460
	I0903 23:29:42.142557  151207 main.go:141] libmachine: (pause-957460) Calling .DriverName
	I0903 23:29:42.143061  151207 main.go:141] libmachine: (pause-957460) Calling .DriverName
	I0903 23:29:42.143240  151207 main.go:141] libmachine: (pause-957460) Calling .DriverName
	I0903 23:29:42.143335  151207 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0903 23:29:42.143393  151207 main.go:141] libmachine: (pause-957460) Calling .GetSSHHostname
	I0903 23:29:42.143425  151207 ssh_runner.go:195] Run: cat /version.json
	I0903 23:29:42.143446  151207 main.go:141] libmachine: (pause-957460) Calling .GetSSHHostname
	I0903 23:29:42.146189  151207 main.go:141] libmachine: (pause-957460) DBG | domain pause-957460 has defined MAC address 52:54:00:00:df:e5 in network mk-pause-957460
	I0903 23:29:42.146538  151207 main.go:141] libmachine: (pause-957460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:df:e5", ip: ""} in network mk-pause-957460: {Iface:virbr1 ExpiryTime:2025-09-04 00:28:42 +0000 UTC Type:0 Mac:52:54:00:00:df:e5 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:pause-957460 Clientid:01:52:54:00:00:df:e5}
	I0903 23:29:42.146560  151207 main.go:141] libmachine: (pause-957460) DBG | domain pause-957460 has defined MAC address 52:54:00:00:df:e5 in network mk-pause-957460
	I0903 23:29:42.146588  151207 main.go:141] libmachine: (pause-957460) DBG | domain pause-957460 has defined IP address 192.168.39.90 and MAC address 52:54:00:00:df:e5 in network mk-pause-957460
	I0903 23:29:42.146748  151207 main.go:141] libmachine: (pause-957460) Calling .GetSSHPort
	I0903 23:29:42.146918  151207 main.go:141] libmachine: (pause-957460) Calling .GetSSHKeyPath
	I0903 23:29:42.147051  151207 main.go:141] libmachine: (pause-957460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:df:e5", ip: ""} in network mk-pause-957460: {Iface:virbr1 ExpiryTime:2025-09-04 00:28:42 +0000 UTC Type:0 Mac:52:54:00:00:df:e5 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:pause-957460 Clientid:01:52:54:00:00:df:e5}
	I0903 23:29:42.147064  151207 main.go:141] libmachine: (pause-957460) Calling .GetSSHUsername
	I0903 23:29:42.147076  151207 main.go:141] libmachine: (pause-957460) DBG | domain pause-957460 has defined IP address 192.168.39.90 and MAC address 52:54:00:00:df:e5 in network mk-pause-957460
	I0903 23:29:42.147227  151207 main.go:141] libmachine: (pause-957460) Calling .GetSSHPort
	I0903 23:29:42.147233  151207 sshutil.go:53] new ssh client: &{IP:192.168.39.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21341-109162/.minikube/machines/pause-957460/id_rsa Username:docker}
	I0903 23:29:42.147373  151207 main.go:141] libmachine: (pause-957460) Calling .GetSSHKeyPath
	I0903 23:29:42.147517  151207 main.go:141] libmachine: (pause-957460) Calling .GetSSHUsername
	I0903 23:29:42.147656  151207 sshutil.go:53] new ssh client: &{IP:192.168.39.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21341-109162/.minikube/machines/pause-957460/id_rsa Username:docker}
	I0903 23:29:42.228035  151207 ssh_runner.go:195] Run: systemctl --version
	I0903 23:29:42.260654  151207 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0903 23:29:42.412845  151207 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0903 23:29:42.422007  151207 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0903 23:29:42.422075  151207 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0903 23:29:42.433083  151207 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0903 23:29:42.433112  151207 start.go:495] detecting cgroup driver to use...
	I0903 23:29:42.433177  151207 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0903 23:29:42.458445  151207 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0903 23:29:42.480967  151207 docker.go:218] disabling cri-docker service (if available) ...
	I0903 23:29:42.481031  151207 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0903 23:29:42.498735  151207 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0903 23:29:42.518587  151207 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0903 23:29:42.721538  151207 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0903 23:29:42.930794  151207 docker.go:234] disabling docker service ...
	I0903 23:29:42.930878  151207 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0903 23:29:42.968840  151207 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0903 23:29:42.984510  151207 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0903 23:29:43.163361  151207 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0903 23:29:43.333615  151207 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0903 23:29:43.355212  151207 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0903 23:29:43.385327  151207 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0903 23:29:43.385414  151207 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0903 23:29:43.400779  151207 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0903 23:29:43.400847  151207 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0903 23:29:43.413990  151207 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0903 23:29:43.429581  151207 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0903 23:29:43.444858  151207 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0903 23:29:43.457225  151207 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0903 23:29:43.472700  151207 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0903 23:29:43.486584  151207 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0903 23:29:43.498285  151207 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0903 23:29:43.508096  151207 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0903 23:29:43.520704  151207 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0903 23:29:43.681356  151207 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0903 23:29:45.289300  151207 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1.607899314s)
	I0903 23:29:45.289340  151207 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0903 23:29:45.289423  151207 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0903 23:29:45.295877  151207 start.go:563] Will wait 60s for crictl version
	I0903 23:29:45.295941  151207 ssh_runner.go:195] Run: which crictl
	I0903 23:29:45.300396  151207 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0903 23:29:45.338415  151207 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0903 23:29:45.338517  151207 ssh_runner.go:195] Run: crio --version
	I0903 23:29:45.376964  151207 ssh_runner.go:195] Run: crio --version
	I0903 23:29:45.416935  151207 out.go:179] * Preparing Kubernetes v1.34.0 on CRI-O 1.29.1 ...
	I0903 23:29:43.570321  150717 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0903 23:29:43.570668  150717 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [cert-expiration-689039 localhost] and IPs [192.168.72.209 127.0.0.1 ::1]
	I0903 23:29:43.700132  150717 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0903 23:29:44.316275  150717 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0903 23:29:44.849798  150717 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0903 23:29:44.849931  150717 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0903 23:29:45.186276  150717 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0903 23:29:45.807964  150717 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0903 23:29:46.023608  150717 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0903 23:29:46.266540  150717 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0903 23:29:46.555732  150717 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0903 23:29:46.555917  150717 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0903 23:29:46.559489  150717 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0903 23:29:43.085166  151427 main.go:141] libmachine: (NoKubernetes-561956) DBG | I0903 23:29:43.085026  151484 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/21341-109162/.minikube/machines/NoKubernetes-561956/id_rsa...
	I0903 23:29:43.388213  151427 main.go:141] libmachine: (NoKubernetes-561956) DBG | I0903 23:29:43.388091  151484 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/21341-109162/.minikube/machines/NoKubernetes-561956/NoKubernetes-561956.rawdisk...
	I0903 23:29:43.388238  151427 main.go:141] libmachine: (NoKubernetes-561956) DBG | Writing magic tar header
	I0903 23:29:43.388255  151427 main.go:141] libmachine: (NoKubernetes-561956) DBG | Writing SSH key tar header
	I0903 23:29:43.388266  151427 main.go:141] libmachine: (NoKubernetes-561956) DBG | I0903 23:29:43.388227  151484 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/21341-109162/.minikube/machines/NoKubernetes-561956 ...
	I0903 23:29:43.388381  151427 main.go:141] libmachine: (NoKubernetes-561956) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21341-109162/.minikube/machines/NoKubernetes-561956
	I0903 23:29:43.388404  151427 main.go:141] libmachine: (NoKubernetes-561956) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21341-109162/.minikube/machines
	I0903 23:29:43.388416  151427 main.go:141] libmachine: (NoKubernetes-561956) setting executable bit set on /home/jenkins/minikube-integration/21341-109162/.minikube/machines/NoKubernetes-561956 (perms=drwx------)
	I0903 23:29:43.388428  151427 main.go:141] libmachine: (NoKubernetes-561956) setting executable bit set on /home/jenkins/minikube-integration/21341-109162/.minikube/machines (perms=drwxr-xr-x)
	I0903 23:29:43.388434  151427 main.go:141] libmachine: (NoKubernetes-561956) setting executable bit set on /home/jenkins/minikube-integration/21341-109162/.minikube (perms=drwxr-xr-x)
	I0903 23:29:43.388442  151427 main.go:141] libmachine: (NoKubernetes-561956) setting executable bit set on /home/jenkins/minikube-integration/21341-109162 (perms=drwxrwxr-x)
	I0903 23:29:43.388447  151427 main.go:141] libmachine: (NoKubernetes-561956) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0903 23:29:43.388455  151427 main.go:141] libmachine: (NoKubernetes-561956) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0903 23:29:43.388464  151427 main.go:141] libmachine: (NoKubernetes-561956) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21341-109162/.minikube
	I0903 23:29:43.388470  151427 main.go:141] libmachine: (NoKubernetes-561956) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21341-109162
	I0903 23:29:43.388478  151427 main.go:141] libmachine: (NoKubernetes-561956) creating domain...
	I0903 23:29:43.388484  151427 main.go:141] libmachine: (NoKubernetes-561956) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0903 23:29:43.388489  151427 main.go:141] libmachine: (NoKubernetes-561956) DBG | checking permissions on dir: /home/jenkins
	I0903 23:29:43.388494  151427 main.go:141] libmachine: (NoKubernetes-561956) DBG | checking permissions on dir: /home
	I0903 23:29:43.388500  151427 main.go:141] libmachine: (NoKubernetes-561956) DBG | skipping /home - not owner
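
The permission pass above walks from the machine directory up toward the filesystem root, fixing the executable bit on directories the current user owns and skipping the rest ("skipping /home - not owner"). A Linux-only sketch of that walk, not minikube's exact code; the starting path is taken from the log and the chmod step is left as a comment:

	package main

	import (
		"fmt"
		"os"
		"path/filepath"
		"syscall"
	)

	func main() {
		uid := os.Getuid()
		dir := "/home/jenkins/minikube-integration/21341-109162/.minikube/machines/NoKubernetes-561956"
		for ; dir != "/"; dir = filepath.Dir(dir) {
			fi, err := os.Stat(dir)
			if err != nil {
				break
			}
			fmt.Println("checking permissions on dir:", dir)
			if st, ok := fi.Sys().(*syscall.Stat_t); ok && int(st.Uid) != uid {
				fmt.Println("skipping", dir, "- not owner")
				break
			}
			// The real pass would now ensure the executable bit, e.g.
			// os.Chmod(dir, fi.Mode()|0o100) for the owner.
		}
	}
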
	I0903 23:29:43.389990  151427 main.go:141] libmachine: (NoKubernetes-561956) define libvirt domain using xml: 
	I0903 23:29:43.390011  151427 main.go:141] libmachine: (NoKubernetes-561956) <domain type='kvm'>
	I0903 23:29:43.390021  151427 main.go:141] libmachine: (NoKubernetes-561956)   <name>NoKubernetes-561956</name>
	I0903 23:29:43.390028  151427 main.go:141] libmachine: (NoKubernetes-561956)   <memory unit='MiB'>3072</memory>
	I0903 23:29:43.390038  151427 main.go:141] libmachine: (NoKubernetes-561956)   <vcpu>2</vcpu>
	I0903 23:29:43.390051  151427 main.go:141] libmachine: (NoKubernetes-561956)   <features>
	I0903 23:29:43.390084  151427 main.go:141] libmachine: (NoKubernetes-561956)     <acpi/>
	I0903 23:29:43.390106  151427 main.go:141] libmachine: (NoKubernetes-561956)     <apic/>
	I0903 23:29:43.390120  151427 main.go:141] libmachine: (NoKubernetes-561956)     <pae/>
	I0903 23:29:43.390126  151427 main.go:141] libmachine: (NoKubernetes-561956)     
	I0903 23:29:43.390134  151427 main.go:141] libmachine: (NoKubernetes-561956)   </features>
	I0903 23:29:43.390142  151427 main.go:141] libmachine: (NoKubernetes-561956)   <cpu mode='host-passthrough'>
	I0903 23:29:43.390149  151427 main.go:141] libmachine: (NoKubernetes-561956)   
	I0903 23:29:43.390156  151427 main.go:141] libmachine: (NoKubernetes-561956)   </cpu>
	I0903 23:29:43.390163  151427 main.go:141] libmachine: (NoKubernetes-561956)   <os>
	I0903 23:29:43.390169  151427 main.go:141] libmachine: (NoKubernetes-561956)     <type>hvm</type>
	I0903 23:29:43.390177  151427 main.go:141] libmachine: (NoKubernetes-561956)     <boot dev='cdrom'/>
	I0903 23:29:43.390188  151427 main.go:141] libmachine: (NoKubernetes-561956)     <boot dev='hd'/>
	I0903 23:29:43.390197  151427 main.go:141] libmachine: (NoKubernetes-561956)     <bootmenu enable='no'/>
	I0903 23:29:43.390203  151427 main.go:141] libmachine: (NoKubernetes-561956)   </os>
	I0903 23:29:43.390213  151427 main.go:141] libmachine: (NoKubernetes-561956)   <devices>
	I0903 23:29:43.390220  151427 main.go:141] libmachine: (NoKubernetes-561956)     <disk type='file' device='cdrom'>
	I0903 23:29:43.390233  151427 main.go:141] libmachine: (NoKubernetes-561956)       <source file='/home/jenkins/minikube-integration/21341-109162/.minikube/machines/NoKubernetes-561956/boot2docker.iso'/>
	I0903 23:29:43.390240  151427 main.go:141] libmachine: (NoKubernetes-561956)       <target dev='hdc' bus='scsi'/>
	I0903 23:29:43.390247  151427 main.go:141] libmachine: (NoKubernetes-561956)       <readonly/>
	I0903 23:29:43.390253  151427 main.go:141] libmachine: (NoKubernetes-561956)     </disk>
	I0903 23:29:43.390276  151427 main.go:141] libmachine: (NoKubernetes-561956)     <disk type='file' device='disk'>
	I0903 23:29:43.390299  151427 main.go:141] libmachine: (NoKubernetes-561956)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0903 23:29:43.390317  151427 main.go:141] libmachine: (NoKubernetes-561956)       <source file='/home/jenkins/minikube-integration/21341-109162/.minikube/machines/NoKubernetes-561956/NoKubernetes-561956.rawdisk'/>
	I0903 23:29:43.390329  151427 main.go:141] libmachine: (NoKubernetes-561956)       <target dev='hda' bus='virtio'/>
	I0903 23:29:43.390341  151427 main.go:141] libmachine: (NoKubernetes-561956)     </disk>
	I0903 23:29:43.390351  151427 main.go:141] libmachine: (NoKubernetes-561956)     <interface type='network'>
	I0903 23:29:43.390362  151427 main.go:141] libmachine: (NoKubernetes-561956)       <source network='mk-NoKubernetes-561956'/>
	I0903 23:29:43.390373  151427 main.go:141] libmachine: (NoKubernetes-561956)       <model type='virtio'/>
	I0903 23:29:43.390395  151427 main.go:141] libmachine: (NoKubernetes-561956)     </interface>
	I0903 23:29:43.390419  151427 main.go:141] libmachine: (NoKubernetes-561956)     <interface type='network'>
	I0903 23:29:43.390431  151427 main.go:141] libmachine: (NoKubernetes-561956)       <source network='default'/>
	I0903 23:29:43.390441  151427 main.go:141] libmachine: (NoKubernetes-561956)       <model type='virtio'/>
	I0903 23:29:43.390450  151427 main.go:141] libmachine: (NoKubernetes-561956)     </interface>
	I0903 23:29:43.390459  151427 main.go:141] libmachine: (NoKubernetes-561956)     <serial type='pty'>
	I0903 23:29:43.390467  151427 main.go:141] libmachine: (NoKubernetes-561956)       <target port='0'/>
	I0903 23:29:43.390476  151427 main.go:141] libmachine: (NoKubernetes-561956)     </serial>
	I0903 23:29:43.390491  151427 main.go:141] libmachine: (NoKubernetes-561956)     <console type='pty'>
	I0903 23:29:43.390507  151427 main.go:141] libmachine: (NoKubernetes-561956)       <target type='serial' port='0'/>
	I0903 23:29:43.390519  151427 main.go:141] libmachine: (NoKubernetes-561956)     </console>
	I0903 23:29:43.390528  151427 main.go:141] libmachine: (NoKubernetes-561956)     <rng model='virtio'>
	I0903 23:29:43.390537  151427 main.go:141] libmachine: (NoKubernetes-561956)       <backend model='random'>/dev/random</backend>
	I0903 23:29:43.390546  151427 main.go:141] libmachine: (NoKubernetes-561956)     </rng>
	I0903 23:29:43.390555  151427 main.go:141] libmachine: (NoKubernetes-561956)     
	I0903 23:29:43.390561  151427 main.go:141] libmachine: (NoKubernetes-561956)     
	I0903 23:29:43.390573  151427 main.go:141] libmachine: (NoKubernetes-561956)   </devices>
	I0903 23:29:43.390591  151427 main.go:141] libmachine: (NoKubernetes-561956) </domain>
	I0903 23:29:43.390605  151427 main.go:141] libmachine: (NoKubernetes-561956) 
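
The domain definition above is rendered from an XML template and handed to libvirt. A trimmed sketch using Go's text/template; the template and struct here are illustrative, not minikube's actual ones, and the rendered XML would then go to virsh define or the libvirt API:

	package main

	import (
		"os"
		"text/template"
	)

	// A trimmed-down shape of the libvirt domain XML seen in the log.
	const domainTmpl = `<domain type='kvm'>
	  <name>{{.Name}}</name>
	  <memory unit='MiB'>{{.MemoryMiB}}</memory>
	  <vcpu>{{.CPUs}}</vcpu>
	  <os><type>hvm</type><boot dev='cdrom'/><boot dev='hd'/></os>
	</domain>
	`

	type domain struct {
		Name      string
		MemoryMiB int
		CPUs      int
	}

	func main() {
		t := template.Must(template.New("domain").Parse(domainTmpl))
		// The rendered XML is what would be passed to libvirt to
		// define the domain.
		if err := t.Execute(os.Stdout, domain{Name: "NoKubernetes-561956", MemoryMiB: 3072, CPUs: 2}); err != nil {
			panic(err)
		}
	}
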
	I0903 23:29:43.411697  151427 main.go:141] libmachine: (NoKubernetes-561956) DBG | domain NoKubernetes-561956 has defined MAC address 52:54:00:3a:6b:ee in network default
	I0903 23:29:43.412551  151427 main.go:141] libmachine: (NoKubernetes-561956) starting domain...
	I0903 23:29:43.412581  151427 main.go:141] libmachine: (NoKubernetes-561956) DBG | domain NoKubernetes-561956 has defined MAC address 52:54:00:64:e0:3d in network mk-NoKubernetes-561956
	I0903 23:29:43.412590  151427 main.go:141] libmachine: (NoKubernetes-561956) ensuring networks are active...
	I0903 23:29:43.413453  151427 main.go:141] libmachine: (NoKubernetes-561956) Ensuring network default is active
	I0903 23:29:43.413799  151427 main.go:141] libmachine: (NoKubernetes-561956) Ensuring network mk-NoKubernetes-561956 is active
	I0903 23:29:43.414702  151427 main.go:141] libmachine: (NoKubernetes-561956) getting domain XML...
	I0903 23:29:43.415474  151427 main.go:141] libmachine: (NoKubernetes-561956) creating domain...
	I0903 23:29:45.175742  151427 main.go:141] libmachine: (NoKubernetes-561956) waiting for IP...
	I0903 23:29:45.176750  151427 main.go:141] libmachine: (NoKubernetes-561956) DBG | domain NoKubernetes-561956 has defined MAC address 52:54:00:64:e0:3d in network mk-NoKubernetes-561956
	I0903 23:29:45.177251  151427 main.go:141] libmachine: (NoKubernetes-561956) DBG | unable to find current IP address of domain NoKubernetes-561956 in network mk-NoKubernetes-561956
	I0903 23:29:45.177316  151427 main.go:141] libmachine: (NoKubernetes-561956) DBG | I0903 23:29:45.177241  151484 retry.go:31] will retry after 262.762338ms: waiting for domain to come up
	I0903 23:29:45.441968  151427 main.go:141] libmachine: (NoKubernetes-561956) DBG | domain NoKubernetes-561956 has defined MAC address 52:54:00:64:e0:3d in network mk-NoKubernetes-561956
	I0903 23:29:45.442567  151427 main.go:141] libmachine: (NoKubernetes-561956) DBG | unable to find current IP address of domain NoKubernetes-561956 in network mk-NoKubernetes-561956
	I0903 23:29:45.442622  151427 main.go:141] libmachine: (NoKubernetes-561956) DBG | I0903 23:29:45.442551  151484 retry.go:31] will retry after 291.065996ms: waiting for domain to come up
	I0903 23:29:45.735275  151427 main.go:141] libmachine: (NoKubernetes-561956) DBG | domain NoKubernetes-561956 has defined MAC address 52:54:00:64:e0:3d in network mk-NoKubernetes-561956
	I0903 23:29:45.735779  151427 main.go:141] libmachine: (NoKubernetes-561956) DBG | unable to find current IP address of domain NoKubernetes-561956 in network mk-NoKubernetes-561956
	I0903 23:29:45.735823  151427 main.go:141] libmachine: (NoKubernetes-561956) DBG | I0903 23:29:45.735775  151484 retry.go:31] will retry after 302.77737ms: waiting for domain to come up
	I0903 23:29:46.040606  151427 main.go:141] libmachine: (NoKubernetes-561956) DBG | domain NoKubernetes-561956 has defined MAC address 52:54:00:64:e0:3d in network mk-NoKubernetes-561956
	I0903 23:29:46.041140  151427 main.go:141] libmachine: (NoKubernetes-561956) DBG | unable to find current IP address of domain NoKubernetes-561956 in network mk-NoKubernetes-561956
	I0903 23:29:46.041670  151427 main.go:141] libmachine: (NoKubernetes-561956) DBG | I0903 23:29:46.041233  151484 retry.go:31] will retry after 598.623418ms: waiting for domain to come up
	I0903 23:29:46.642479  151427 main.go:141] libmachine: (NoKubernetes-561956) DBG | domain NoKubernetes-561956 has defined MAC address 52:54:00:64:e0:3d in network mk-NoKubernetes-561956
	I0903 23:29:46.643238  151427 main.go:141] libmachine: (NoKubernetes-561956) DBG | unable to find current IP address of domain NoKubernetes-561956 in network mk-NoKubernetes-561956
	I0903 23:29:46.643279  151427 main.go:141] libmachine: (NoKubernetes-561956) DBG | I0903 23:29:46.643199  151484 retry.go:31] will retry after 473.14286ms: waiting for domain to come up
	I0903 23:29:47.117795  151427 main.go:141] libmachine: (NoKubernetes-561956) DBG | domain NoKubernetes-561956 has defined MAC address 52:54:00:64:e0:3d in network mk-NoKubernetes-561956
	I0903 23:29:47.118386  151427 main.go:141] libmachine: (NoKubernetes-561956) DBG | unable to find current IP address of domain NoKubernetes-561956 in network mk-NoKubernetes-561956
	I0903 23:29:47.118418  151427 main.go:141] libmachine: (NoKubernetes-561956) DBG | I0903 23:29:47.118344  151484 retry.go:31] will retry after 617.307283ms: waiting for domain to come up
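
The retry.go lines show a jittered, roughly increasing backoff while polling for the domain's DHCP lease. A sketch of that poll loop; lookupIP is a hypothetical stand-in for querying libvirt's leases by MAC address:

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// lookupIP stands in for querying libvirt's DHCP leases for the
	// domain's MAC address; it is hypothetical, not minikube's API.
	func lookupIP() (string, error) {
		return "", errors.New("unable to find current IP address")
	}

	func waitForIP(timeout time.Duration) (string, error) {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if ip, err := lookupIP(); err == nil {
				return ip, nil
			}
			// Jittered backoff, similar in spirit to the
			// "will retry after 262.762338ms" lines above.
			d := time.Duration(200+rand.Intn(400)) * time.Millisecond
			fmt.Printf("will retry after %v: waiting for domain to come up\n", d)
			time.Sleep(d)
		}
		return "", errors.New("timed out waiting for IP")
	}

	func main() {
		if _, err := waitForIP(2 * time.Second); err != nil {
			fmt.Println(err)
		}
	}
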
	I0903 23:29:46.561123  150717 out.go:252]   - Booting up control plane ...
	I0903 23:29:46.561241  150717 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0903 23:29:46.561360  150717 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0903 23:29:46.562434  150717 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0903 23:29:46.588511  150717 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0903 23:29:46.588679  150717 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0903 23:29:46.596478  150717 kubeadm.go:310] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0903 23:29:46.596695  150717 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0903 23:29:46.596826  150717 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0903 23:29:46.783362  150717 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0903 23:29:46.783498  150717 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0903 23:29:47.785873  150717 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.002389235s
	I0903 23:29:47.788738  150717 kubeadm.go:310] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0903 23:29:47.788852  150717 kubeadm.go:310] [control-plane-check] Checking kube-apiserver at https://192.168.72.209:8443/livez
	I0903 23:29:47.788994  150717 kubeadm.go:310] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0903 23:29:47.789114  150717 kubeadm.go:310] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
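
kubeadm's control-plane checks poll well-known local endpoints: the kubelet's healthz on port 10248 and the HTTPS livez/healthz ports of the scheduler and controller-manager. A one-shot Go sketch of the same probes; the insecure TLS transport is needed because those components serve self-signed certificates:

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// The endpoints kubeadm probes in the log above.
	var endpoints = []string{
		"http://127.0.0.1:10248/healthz",  // kubelet
		"https://127.0.0.1:10257/healthz", // kube-controller-manager
		"https://127.0.0.1:10259/livez",   // kube-scheduler
	}

	func main() {
		client := &http.Client{
			Timeout:   2 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		for _, url := range endpoints {
			resp, err := client.Get(url)
			if err != nil {
				fmt.Println(url, "not healthy yet:", err)
				continue
			}
			resp.Body.Close()
			fmt.Println(url, "->", resp.Status)
		}
	}
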
	I0903 23:29:45.417919  151207 main.go:141] libmachine: (pause-957460) Calling .GetIP
	I0903 23:29:45.421328  151207 main.go:141] libmachine: (pause-957460) DBG | domain pause-957460 has defined MAC address 52:54:00:00:df:e5 in network mk-pause-957460
	I0903 23:29:45.421808  151207 main.go:141] libmachine: (pause-957460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:df:e5", ip: ""} in network mk-pause-957460: {Iface:virbr1 ExpiryTime:2025-09-04 00:28:42 +0000 UTC Type:0 Mac:52:54:00:00:df:e5 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:pause-957460 Clientid:01:52:54:00:00:df:e5}
	I0903 23:29:45.421837  151207 main.go:141] libmachine: (pause-957460) DBG | domain pause-957460 has defined IP address 192.168.39.90 and MAC address 52:54:00:00:df:e5 in network mk-pause-957460
	I0903 23:29:45.422075  151207 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0903 23:29:45.426580  151207 kubeadm.go:875] updating cluster {Name:pause-957460 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21147/minikube-v1.36.0-1753487480-21147-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:pause-957460 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.90 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0903 23:29:45.426697  151207 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0903 23:29:45.426736  151207 ssh_runner.go:195] Run: sudo crictl images --output json
	I0903 23:29:45.473814  151207 crio.go:514] all images are preloaded for cri-o runtime.
	I0903 23:29:45.473844  151207 crio.go:433] Images already preloaded, skipping extraction
	I0903 23:29:45.473895  151207 ssh_runner.go:195] Run: sudo crictl images --output json
	I0903 23:29:45.520433  151207 crio.go:514] all images are preloaded for cri-o runtime.
	I0903 23:29:45.520461  151207 cache_images.go:85] Images are preloaded, skipping loading
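
The preload check shells out to crictl and decodes the returned JSON to see whether the expected images are already present. A trimmed Go sketch that decodes only the repoTags field of crictl's image listing (the real schema carries more):

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// imageList decodes just the repoTags field of
	// "crictl images --output json".
	type imageList struct {
		Images []struct {
			RepoTags []string `json:"repoTags"`
		} `json:"images"`
	}

	func main() {
		out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
		if err != nil {
			fmt.Println("crictl:", err)
			return
		}
		var list imageList
		if err := json.Unmarshal(out, &list); err != nil {
			fmt.Println("decode:", err)
			return
		}
		for _, img := range list.Images {
			fmt.Println(img.RepoTags)
		}
	}
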
	I0903 23:29:45.520472  151207 kubeadm.go:926] updating node { 192.168.39.90 8443 v1.34.0 crio true true} ...
	I0903 23:29:45.520584  151207 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-957460 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.90
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:pause-957460 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0903 23:29:45.520661  151207 ssh_runner.go:195] Run: crio config
	I0903 23:29:45.572672  151207 cni.go:84] Creating CNI manager for ""
	I0903 23:29:45.572700  151207 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0903 23:29:45.572716  151207 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0903 23:29:45.572747  151207 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.90 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-957460 NodeName:pause-957460 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.90"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.90 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0903 23:29:45.572933  151207 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.90
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-957460"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.90"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.90"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
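
The multi-document config above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) is what gets written to kubeadm.yaml.new a few lines below. As a hedged aside, a kubeadm release new enough to ship the validate subcommand can schema-check such a file before it is used; a minimal Go sketch:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Assumes kubeadm provides "kubeadm config validate"
		// (present in recent releases).
		out, err := exec.Command("kubeadm", "config", "validate",
			"--config", "/var/tmp/minikube/kubeadm.yaml.new").CombinedOutput()
		fmt.Print(string(out))
		if err != nil {
			fmt.Println("validation failed:", err)
		}
	}
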
	
	I0903 23:29:45.573025  151207 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0903 23:29:45.587509  151207 binaries.go:44] Found k8s binaries, skipping transfer
	I0903 23:29:45.587583  151207 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0903 23:29:45.600483  151207 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I0903 23:29:45.623892  151207 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0903 23:29:45.644207  151207 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2212 bytes)
	I0903 23:29:45.664580  151207 ssh_runner.go:195] Run: grep 192.168.39.90	control-plane.minikube.internal$ /etc/hosts
	I0903 23:29:45.668791  151207 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0903 23:29:45.845844  151207 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0903 23:29:45.870895  151207 certs.go:68] Setting up /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/pause-957460 for IP: 192.168.39.90
	I0903 23:29:45.870920  151207 certs.go:194] generating shared ca certs ...
	I0903 23:29:45.870936  151207 certs.go:226] acquiring lock for ca certs: {Name:mk06ceac58fd506484973632a7bd0b701183c2c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0903 23:29:45.871121  151207 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21341-109162/.minikube/ca.key
	I0903 23:29:45.871183  151207 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21341-109162/.minikube/proxy-client-ca.key
	I0903 23:29:45.871197  151207 certs.go:256] generating profile certs ...
	I0903 23:29:45.871284  151207 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/pause-957460/client.key
	I0903 23:29:45.871344  151207 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/pause-957460/apiserver.key.13718f5a
	I0903 23:29:45.871381  151207 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/pause-957460/proxy-client.key
	I0903 23:29:45.871484  151207 certs.go:484] found cert: /home/jenkins/minikube-integration/21341-109162/.minikube/certs/113288.pem (1338 bytes)
	W0903 23:29:45.871510  151207 certs.go:480] ignoring /home/jenkins/minikube-integration/21341-109162/.minikube/certs/113288_empty.pem, impossibly tiny 0 bytes
	I0903 23:29:45.871520  151207 certs.go:484] found cert: /home/jenkins/minikube-integration/21341-109162/.minikube/certs/ca-key.pem (1675 bytes)
	I0903 23:29:45.871541  151207 certs.go:484] found cert: /home/jenkins/minikube-integration/21341-109162/.minikube/certs/ca.pem (1078 bytes)
	I0903 23:29:45.871565  151207 certs.go:484] found cert: /home/jenkins/minikube-integration/21341-109162/.minikube/certs/cert.pem (1123 bytes)
	I0903 23:29:45.871602  151207 certs.go:484] found cert: /home/jenkins/minikube-integration/21341-109162/.minikube/certs/key.pem (1675 bytes)
	I0903 23:29:45.871661  151207 certs.go:484] found cert: /home/jenkins/minikube-integration/21341-109162/.minikube/files/etc/ssl/certs/1132882.pem (1708 bytes)
	I0903 23:29:45.872248  151207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0903 23:29:45.905809  151207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0903 23:29:46.013449  151207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0903 23:29:46.065527  151207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0903 23:29:46.126258  151207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/pause-957460/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0903 23:29:46.218401  151207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/pause-957460/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0903 23:29:46.317576  151207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/pause-957460/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0903 23:29:46.405885  151207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/pause-957460/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0903 23:29:46.468227  151207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/certs/113288.pem --> /usr/share/ca-certificates/113288.pem (1338 bytes)
	I0903 23:29:46.540356  151207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/files/etc/ssl/certs/1132882.pem --> /usr/share/ca-certificates/1132882.pem (1708 bytes)
	I0903 23:29:46.628688  151207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0903 23:29:46.727214  151207 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0903 23:29:46.777080  151207 ssh_runner.go:195] Run: openssl version
	I0903 23:29:46.796340  151207 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/113288.pem && ln -fs /usr/share/ca-certificates/113288.pem /etc/ssl/certs/113288.pem"
	I0903 23:29:46.821701  151207 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/113288.pem
	I0903 23:29:46.833705  151207 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep  3 22:36 /usr/share/ca-certificates/113288.pem
	I0903 23:29:46.833779  151207 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/113288.pem
	I0903 23:29:46.845539  151207 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/113288.pem /etc/ssl/certs/51391683.0"
	I0903 23:29:46.868263  151207 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1132882.pem && ln -fs /usr/share/ca-certificates/1132882.pem /etc/ssl/certs/1132882.pem"
	I0903 23:29:46.890836  151207 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1132882.pem
	I0903 23:29:46.902613  151207 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep  3 22:36 /usr/share/ca-certificates/1132882.pem
	I0903 23:29:46.902691  151207 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1132882.pem
	I0903 23:29:46.915399  151207 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1132882.pem /etc/ssl/certs/3ec20f2e.0"
	I0903 23:29:46.936675  151207 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0903 23:29:47.040563  151207 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0903 23:29:47.056605  151207 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep  3 22:28 /usr/share/ca-certificates/minikubeCA.pem
	I0903 23:29:47.056691  151207 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0903 23:29:47.072040  151207 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
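
The openssl/ln sequence above implements OpenSSL's hashed-directory CA lookup: openssl x509 -hash -noout prints the subject-name hash (b5213941 for minikubeCA here), and a <hash>.0 symlink in /etc/ssl/certs lets TLS clients find the certificate. A Go sketch of the same install step:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	// installCA mirrors the pattern in the log: compute the subject hash
	// with openssl, then force-link <hash>.0 in /etc/ssl/certs at the cert.
	func installCA(pem string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
		if err != nil {
			return err
		}
		link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
		os.Remove(link) // "ln -fs" semantics: replace any existing link
		return os.Symlink(pem, link)
	}

	func main() {
		if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
			fmt.Println(err)
		}
	}
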
	I0903 23:29:47.101448  151207 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0903 23:29:47.111388  151207 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0903 23:29:47.125496  151207 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0903 23:29:47.142359  151207 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0903 23:29:47.157193  151207 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0903 23:29:47.169376  151207 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0903 23:29:47.180485  151207 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
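
Each openssl x509 -checkend 86400 run above exits non-zero if the certificate expires within the next 86400 seconds (24 hours), which is how the existing control-plane certificates are judged still usable. A sketch over a subset of the paths from the log:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		certs := []string{
			"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
			"/var/lib/minikube/certs/etcd/server.crt",
			"/var/lib/minikube/certs/front-proxy-client.crt",
		}
		for _, c := range certs {
			// -checkend 86400 exits 1 if the cert expires within 24h.
			err := exec.Command("openssl", "x509", "-noout", "-in", c,
				"-checkend", "86400").Run()
			if err != nil {
				fmt.Println(c, "expires within 24h (or check failed):", err)
			} else {
				fmt.Println(c, "valid for at least 24h")
			}
		}
	}
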
	I0903 23:29:47.191874  151207 kubeadm.go:392] StartCluster: {Name:pause-957460 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21147/minikube-v1.36.0-1753487480-21147-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:pause-957460 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.90 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0903 23:29:47.192024  151207 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0903 23:29:47.192086  151207 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0903 23:29:47.262205  151207 cri.go:89] found id: "10e8e7d7bd3e7aac3752bd071d990274ecb8edc847cf6261efa2c66baf0d994b"
	I0903 23:29:47.262238  151207 cri.go:89] found id: "1b24e6d40da9c16e6c760fabd44817047863d2e5bac5ea60d85bb264100b7c73"
	I0903 23:29:47.262244  151207 cri.go:89] found id: "aa9d14c5b4e3ab0d9200e6db3849d49689c17a66f1418ddc40f9c7abca252cdf"
	I0903 23:29:47.262248  151207 cri.go:89] found id: "2337c9c17a585736200e843c09f9dc0d4ed47cc2d8a8aa8a77f42e9548c11e5e"
	I0903 23:29:47.262252  151207 cri.go:89] found id: "63a5274b0c5bfadea8983b60493d6610cc81c20b75987c71017aafd565687523"
	I0903 23:29:47.262256  151207 cri.go:89] found id: "622bd13cd8cac3d51aa7b0cafd1834f8de52c46ccd69532fe8bf3a6eb4a2e49d"
	I0903 23:29:47.262261  151207 cri.go:89] found id: "bf81fb211da095af4350a78f944ae302c860603d85647c92df059e7bab1bf58b"
	I0903 23:29:47.262266  151207 cri.go:89] found id: "235fbdc3e7ec406c66669bdd536b8030197b6f88152ff1ad09a72dcac8975024"
	I0903 23:29:47.262270  151207 cri.go:89] found id: "188275cc44fc3fba51ff3713eaf778fb8a952b28dbcea50ec838f84764dfebca"
	I0903 23:29:47.262278  151207 cri.go:89] found id: "4897d1fe35dcfa698a0f2777d418a4d07ee29d0345b9b1a7efaea54df6234af0"
	I0903 23:29:47.262282  151207 cri.go:89] found id: ""
	I0903 23:29:47.262343  151207 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
-- /stdout --
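
The container inventory at the end of the truncated log comes from crictl ps filtered by the io.kubernetes.pod.namespace=kube-system label, returning bare container IDs. A Go sketch of the same listing:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Same filter as the log: every kube-system pod container, IDs only.
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
			"--label", "io.kubernetes.pod.namespace=kube-system").Output()
		if err != nil {
			fmt.Println("crictl ps:", err)
			return
		}
		for _, id := range strings.Fields(string(out)) {
			fmt.Println("found id:", id)
		}
	}
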
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-957460 -n pause-957460
helpers_test.go:269: (dbg) Run:  kubectl --context pause-957460 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-957460 -n pause-957460
helpers_test.go:252: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p pause-957460 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p pause-957460 logs -n 25: (1.367120654s)
helpers_test.go:260: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                            ARGS                                                                             │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-380966 sudo containerd config dump                                                                                                                │ cilium-380966             │ jenkins │ v1.36.0 │ 03 Sep 25 23:25 UTC │                     │
	│ ssh     │ -p cilium-380966 sudo systemctl status crio --all --full --no-pager                                                                                         │ cilium-380966             │ jenkins │ v1.36.0 │ 03 Sep 25 23:25 UTC │                     │
	│ ssh     │ -p cilium-380966 sudo systemctl cat crio --no-pager                                                                                                         │ cilium-380966             │ jenkins │ v1.36.0 │ 03 Sep 25 23:25 UTC │                     │
	│ ssh     │ -p cilium-380966 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                               │ cilium-380966             │ jenkins │ v1.36.0 │ 03 Sep 25 23:25 UTC │                     │
	│ ssh     │ -p cilium-380966 sudo crio config                                                                                                                           │ cilium-380966             │ jenkins │ v1.36.0 │ 03 Sep 25 23:25 UTC │                     │
	│ delete  │ -p cilium-380966                                                                                                                                            │ cilium-380966             │ jenkins │ v1.36.0 │ 03 Sep 25 23:25 UTC │ 03 Sep 25 23:25 UTC │
	│ start   │ -p running-upgrade-210842 --memory=3072 --vm-driver=kvm2  --container-runtime=crio                                                                          │ minikube                  │ jenkins │ v1.26.0 │ 03 Sep 25 23:25 UTC │ 03 Sep 25 23:27 UTC │
	│ delete  │ -p offline-crio-911470                                                                                                                                      │ offline-crio-911470       │ jenkins │ v1.36.0 │ 03 Sep 25 23:26 UTC │ 03 Sep 25 23:26 UTC │
	│ start   │ -p force-systemd-flag-037213 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio                                   │ force-systemd-flag-037213 │ jenkins │ v1.36.0 │ 03 Sep 25 23:26 UTC │ 03 Sep 25 23:27 UTC │
	│ stop    │ stopped-upgrade-924805 stop                                                                                                                                 │ minikube                  │ jenkins │ v1.26.0 │ 03 Sep 25 23:27 UTC │ 03 Sep 25 23:28 UTC │
	│ start   │ -p running-upgrade-210842 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio                                                      │ running-upgrade-210842    │ jenkins │ v1.36.0 │ 03 Sep 25 23:27 UTC │ 03 Sep 25 23:28 UTC │
	│ ssh     │ force-systemd-flag-037213 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                        │ force-systemd-flag-037213 │ jenkins │ v1.36.0 │ 03 Sep 25 23:27 UTC │ 03 Sep 25 23:27 UTC │
	│ delete  │ -p force-systemd-flag-037213                                                                                                                                │ force-systemd-flag-037213 │ jenkins │ v1.36.0 │ 03 Sep 25 23:27 UTC │ 03 Sep 25 23:27 UTC │
	│ start   │ -p force-systemd-env-753758 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio                                                    │ force-systemd-env-753758  │ jenkins │ v1.36.0 │ 03 Sep 25 23:27 UTC │ 03 Sep 25 23:28 UTC │
	│ mount   │ /home/jenkins:/minikube-host --profile running-upgrade-210842 --v 0 --9p-version 9p2000.L --gid docker --ip  --msize 262144 --port 0 --type 9p --uid docker │ running-upgrade-210842    │ jenkins │ v1.36.0 │ 03 Sep 25 23:28 UTC │                     │
	│ delete  │ -p running-upgrade-210842                                                                                                                                   │ running-upgrade-210842    │ jenkins │ v1.36.0 │ 03 Sep 25 23:28 UTC │ 03 Sep 25 23:28 UTC │
	│ start   │ -p pause-957460 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio                                                     │ pause-957460              │ jenkins │ v1.36.0 │ 03 Sep 25 23:28 UTC │ 03 Sep 25 23:29 UTC │
	│ delete  │ -p force-systemd-env-753758                                                                                                                                 │ force-systemd-env-753758  │ jenkins │ v1.36.0 │ 03 Sep 25 23:28 UTC │ 03 Sep 25 23:28 UTC │
	│ start   │ -p stopped-upgrade-924805 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio                                                      │ stopped-upgrade-924805    │ jenkins │ v1.36.0 │ 03 Sep 25 23:28 UTC │ 03 Sep 25 23:29 UTC │
	│ start   │ -p cert-expiration-689039 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=crio                                                        │ cert-expiration-689039    │ jenkins │ v1.36.0 │ 03 Sep 25 23:28 UTC │ 03 Sep 25 23:29 UTC │
	│ start   │ -p pause-957460 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio                                                                              │ pause-957460              │ jenkins │ v1.36.0 │ 03 Sep 25 23:29 UTC │ 03 Sep 25 23:30 UTC │
	│ mount   │ /home/jenkins:/minikube-host --profile stopped-upgrade-924805 --v 0 --9p-version 9p2000.L --gid docker --ip  --msize 262144 --port 0 --type 9p --uid docker │ stopped-upgrade-924805    │ jenkins │ v1.36.0 │ 03 Sep 25 23:29 UTC │                     │
	│ delete  │ -p stopped-upgrade-924805                                                                                                                                   │ stopped-upgrade-924805    │ jenkins │ v1.36.0 │ 03 Sep 25 23:29 UTC │ 03 Sep 25 23:29 UTC │
	│ start   │ -p NoKubernetes-561956 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio                                                    │ NoKubernetes-561956       │ jenkins │ v1.36.0 │ 03 Sep 25 23:29 UTC │                     │
	│ start   │ -p NoKubernetes-561956 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio                                                         │ NoKubernetes-561956       │ jenkins │ v1.36.0 │ 03 Sep 25 23:29 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/03 23:29:37
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0903 23:29:37.617565  151427 out.go:360] Setting OutFile to fd 1 ...
	I0903 23:29:37.617837  151427 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0903 23:29:37.617847  151427 out.go:374] Setting ErrFile to fd 2...
	I0903 23:29:37.617851  151427 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0903 23:29:37.618022  151427 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21341-109162/.minikube/bin
	I0903 23:29:37.618738  151427 out.go:368] Setting JSON to false
	I0903 23:29:37.620242  151427 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":7922,"bootTime":1756934256,"procs":217,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0903 23:29:37.620328  151427 start.go:140] virtualization: kvm guest
	I0903 23:29:37.621870  151427 out.go:179] * [NoKubernetes-561956] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	I0903 23:29:37.623089  151427 out.go:179]   - MINIKUBE_LOCATION=21341
	I0903 23:29:37.623074  151427 notify.go:220] Checking for updates...
	I0903 23:29:37.625055  151427 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0903 23:29:37.626094  151427 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21341-109162/kubeconfig
	I0903 23:29:37.627105  151427 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21341-109162/.minikube
	I0903 23:29:37.628230  151427 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0903 23:29:37.629149  151427 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0903 23:29:37.630648  151427 config.go:182] Loaded profile config "cert-expiration-689039": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0903 23:29:37.630810  151427 config.go:182] Loaded profile config "kubernetes-upgrade-938492": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0903 23:29:37.631008  151427 config.go:182] Loaded profile config "pause-957460": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0903 23:29:37.631148  151427 driver.go:421] Setting default libvirt URI to qemu:///system
	I0903 23:29:37.677831  151427 out.go:179] * Using the kvm2 driver based on user configuration
	I0903 23:29:37.678837  151427 start.go:304] selected driver: kvm2
	I0903 23:29:37.678857  151427 start.go:918] validating driver "kvm2" against <nil>
	I0903 23:29:37.678872  151427 start.go:929] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0903 23:29:37.679951  151427 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0903 23:29:37.680041  151427 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21341-109162/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0903 23:29:37.699686  151427 install.go:137] /home/jenkins/minikube-integration/21341-109162/.minikube/bin/docker-machine-driver-kvm2 version is 1.36.0
	I0903 23:29:37.699755  151427 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0903 23:29:37.700140  151427 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I0903 23:29:37.700182  151427 cni.go:84] Creating CNI manager for ""
	I0903 23:29:37.700250  151427 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0903 23:29:37.700265  151427 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0903 23:29:37.700365  151427 start.go:348] cluster config:
	{Name:NoKubernetes-561956 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:NoKubernetes-561956 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0903 23:29:37.700556  151427 iso.go:125] acquiring lock: {Name:mk1032fbced3c9e76ba5a04480289e9f07d0eb9c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0903 23:29:37.702896  151427 out.go:179] * Starting "NoKubernetes-561956" primary control-plane node in "NoKubernetes-561956" cluster
	I0903 23:29:37.040830  150717 main.go:141] libmachine: (cert-expiration-689039) Calling .GetIP
	I0903 23:29:37.225934  150717 main.go:141] libmachine: (cert-expiration-689039) DBG | domain cert-expiration-689039 has defined MAC address 52:54:00:d9:56:92 in network mk-cert-expiration-689039
	I0903 23:29:37.226490  150717 main.go:141] libmachine: (cert-expiration-689039) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:56:92", ip: ""} in network mk-cert-expiration-689039: {Iface:virbr4 ExpiryTime:2025-09-04 00:29:26 +0000 UTC Type:0 Mac:52:54:00:d9:56:92 Iaid: IPaddr:192.168.72.209 Prefix:24 Hostname:cert-expiration-689039 Clientid:01:52:54:00:d9:56:92}
	I0903 23:29:37.226535  150717 main.go:141] libmachine: (cert-expiration-689039) DBG | domain cert-expiration-689039 has defined IP address 192.168.72.209 and MAC address 52:54:00:d9:56:92 in network mk-cert-expiration-689039
	I0903 23:29:37.226717  150717 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0903 23:29:37.231114  150717 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0903 23:29:37.245104  150717 kubeadm.go:875] updating cluster {Name:cert-expiration-689039 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21147/minikube-v1.36.0-1753487480-21147-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:cert-expiration-689039 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.209 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0903 23:29:37.245194  150717 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0903 23:29:37.245232  150717 ssh_runner.go:195] Run: sudo crictl images --output json
	I0903 23:29:37.279041  150717 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.0". assuming images are not preloaded.
	I0903 23:29:37.279095  150717 ssh_runner.go:195] Run: which lz4
	I0903 23:29:37.283067  150717 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0903 23:29:37.287290  150717 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0903 23:29:37.287309  150717 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (409455026 bytes)
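
The stat existence check above gates the expensive step: only when /preloaded.tar.lz4 is absent does the ~409 MB preload tarball get copied into the guest. A sketch of that check-then-copy decision, with the transfer itself reduced to a message:

	package main

	import (
		"fmt"
		"os"
	)

	func main() {
		const preload = "/preloaded.tar.lz4"
		// Mirrors the `stat -c "%s %y"` existence check in the log:
		// transfer the tarball only when it is not already present.
		if fi, err := os.Stat(preload); err == nil {
			fmt.Printf("preload present: %d bytes, mtime %v\n", fi.Size(), fi.ModTime())
			return
		}
		fmt.Println("preload missing; would scp the preloaded-images tarball")
	}
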
	I0903 23:29:35.320068  151207 out.go:252] * Updating the running kvm2 "pause-957460" VM ...
	I0903 23:29:35.320093  151207 machine.go:93] provisionDockerMachine start ...
	I0903 23:29:35.320104  151207 main.go:141] libmachine: (pause-957460) Calling .DriverName
	I0903 23:29:35.320298  151207 main.go:141] libmachine: (pause-957460) Calling .GetSSHHostname
	I0903 23:29:35.322936  151207 main.go:141] libmachine: (pause-957460) DBG | domain pause-957460 has defined MAC address 52:54:00:00:df:e5 in network mk-pause-957460
	I0903 23:29:35.323335  151207 main.go:141] libmachine: (pause-957460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:df:e5", ip: ""} in network mk-pause-957460: {Iface:virbr1 ExpiryTime:2025-09-04 00:28:42 +0000 UTC Type:0 Mac:52:54:00:00:df:e5 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:pause-957460 Clientid:01:52:54:00:00:df:e5}
	I0903 23:29:35.323360  151207 main.go:141] libmachine: (pause-957460) DBG | domain pause-957460 has defined IP address 192.168.39.90 and MAC address 52:54:00:00:df:e5 in network mk-pause-957460
	I0903 23:29:35.323507  151207 main.go:141] libmachine: (pause-957460) Calling .GetSSHPort
	I0903 23:29:35.323672  151207 main.go:141] libmachine: (pause-957460) Calling .GetSSHKeyPath
	I0903 23:29:35.323905  151207 main.go:141] libmachine: (pause-957460) Calling .GetSSHKeyPath
	I0903 23:29:35.324050  151207 main.go:141] libmachine: (pause-957460) Calling .GetSSHUsername
	I0903 23:29:35.324227  151207 main.go:141] libmachine: Using SSH client type: native
	I0903 23:29:35.324516  151207 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83a420] 0x83d120 <nil>  [] 0s} 192.168.39.90 22 <nil> <nil>}
	I0903 23:29:35.324531  151207 main.go:141] libmachine: About to run SSH command:
	hostname
	I0903 23:29:35.438588  151207 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-957460
	
	I0903 23:29:35.438635  151207 main.go:141] libmachine: (pause-957460) Calling .GetMachineName
	I0903 23:29:35.438889  151207 buildroot.go:166] provisioning hostname "pause-957460"
	I0903 23:29:35.438917  151207 main.go:141] libmachine: (pause-957460) Calling .GetMachineName
	I0903 23:29:35.439115  151207 main.go:141] libmachine: (pause-957460) Calling .GetSSHHostname
	I0903 23:29:35.442456  151207 main.go:141] libmachine: (pause-957460) DBG | domain pause-957460 has defined MAC address 52:54:00:00:df:e5 in network mk-pause-957460
	I0903 23:29:35.442962  151207 main.go:141] libmachine: (pause-957460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:df:e5", ip: ""} in network mk-pause-957460: {Iface:virbr1 ExpiryTime:2025-09-04 00:28:42 +0000 UTC Type:0 Mac:52:54:00:00:df:e5 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:pause-957460 Clientid:01:52:54:00:00:df:e5}
	I0903 23:29:35.442995  151207 main.go:141] libmachine: (pause-957460) DBG | domain pause-957460 has defined IP address 192.168.39.90 and MAC address 52:54:00:00:df:e5 in network mk-pause-957460
	I0903 23:29:35.443174  151207 main.go:141] libmachine: (pause-957460) Calling .GetSSHPort
	I0903 23:29:35.443378  151207 main.go:141] libmachine: (pause-957460) Calling .GetSSHKeyPath
	I0903 23:29:35.443535  151207 main.go:141] libmachine: (pause-957460) Calling .GetSSHKeyPath
	I0903 23:29:35.443677  151207 main.go:141] libmachine: (pause-957460) Calling .GetSSHUsername
	I0903 23:29:35.443850  151207 main.go:141] libmachine: Using SSH client type: native
	I0903 23:29:35.444144  151207 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83a420] 0x83d120 <nil>  [] 0s} 192.168.39.90 22 <nil> <nil>}
	I0903 23:29:35.444166  151207 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-957460 && echo "pause-957460" | sudo tee /etc/hostname
	I0903 23:29:35.573886  151207 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-957460
	
	I0903 23:29:35.573920  151207 main.go:141] libmachine: (pause-957460) Calling .GetSSHHostname
	I0903 23:29:35.576696  151207 main.go:141] libmachine: (pause-957460) DBG | domain pause-957460 has defined MAC address 52:54:00:00:df:e5 in network mk-pause-957460
	I0903 23:29:35.577038  151207 main.go:141] libmachine: (pause-957460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:df:e5", ip: ""} in network mk-pause-957460: {Iface:virbr1 ExpiryTime:2025-09-04 00:28:42 +0000 UTC Type:0 Mac:52:54:00:00:df:e5 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:pause-957460 Clientid:01:52:54:00:00:df:e5}
	I0903 23:29:35.577066  151207 main.go:141] libmachine: (pause-957460) DBG | domain pause-957460 has defined IP address 192.168.39.90 and MAC address 52:54:00:00:df:e5 in network mk-pause-957460
	I0903 23:29:35.577228  151207 main.go:141] libmachine: (pause-957460) Calling .GetSSHPort
	I0903 23:29:35.577436  151207 main.go:141] libmachine: (pause-957460) Calling .GetSSHKeyPath
	I0903 23:29:35.577619  151207 main.go:141] libmachine: (pause-957460) Calling .GetSSHKeyPath
	I0903 23:29:35.577790  151207 main.go:141] libmachine: (pause-957460) Calling .GetSSHUsername
	I0903 23:29:35.577973  151207 main.go:141] libmachine: Using SSH client type: native
	I0903 23:29:35.578213  151207 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83a420] 0x83d120 <nil>  [] 0s} 192.168.39.90 22 <nil> <nil>}
	I0903 23:29:35.578230  151207 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-957460' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-957460/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-957460' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0903 23:29:35.694230  151207 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0903 23:29:35.694259  151207 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21341-109162/.minikube CaCertPath:/home/jenkins/minikube-integration/21341-109162/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21341-109162/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21341-109162/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21341-109162/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21341-109162/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21341-109162/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21341-109162/.minikube}
	I0903 23:29:35.694283  151207 buildroot.go:174] setting up certificates
	I0903 23:29:35.694293  151207 provision.go:84] configureAuth start
	I0903 23:29:35.694306  151207 main.go:141] libmachine: (pause-957460) Calling .GetMachineName
	I0903 23:29:35.694577  151207 main.go:141] libmachine: (pause-957460) Calling .GetIP
	I0903 23:29:35.697672  151207 main.go:141] libmachine: (pause-957460) DBG | domain pause-957460 has defined MAC address 52:54:00:00:df:e5 in network mk-pause-957460
	I0903 23:29:35.698086  151207 main.go:141] libmachine: (pause-957460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:df:e5", ip: ""} in network mk-pause-957460: {Iface:virbr1 ExpiryTime:2025-09-04 00:28:42 +0000 UTC Type:0 Mac:52:54:00:00:df:e5 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:pause-957460 Clientid:01:52:54:00:00:df:e5}
	I0903 23:29:35.698117  151207 main.go:141] libmachine: (pause-957460) DBG | domain pause-957460 has defined IP address 192.168.39.90 and MAC address 52:54:00:00:df:e5 in network mk-pause-957460
	I0903 23:29:35.698311  151207 main.go:141] libmachine: (pause-957460) Calling .GetSSHHostname
	I0903 23:29:35.701203  151207 main.go:141] libmachine: (pause-957460) DBG | domain pause-957460 has defined MAC address 52:54:00:00:df:e5 in network mk-pause-957460
	I0903 23:29:35.701549  151207 main.go:141] libmachine: (pause-957460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:df:e5", ip: ""} in network mk-pause-957460: {Iface:virbr1 ExpiryTime:2025-09-04 00:28:42 +0000 UTC Type:0 Mac:52:54:00:00:df:e5 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:pause-957460 Clientid:01:52:54:00:00:df:e5}
	I0903 23:29:35.701579  151207 main.go:141] libmachine: (pause-957460) DBG | domain pause-957460 has defined IP address 192.168.39.90 and MAC address 52:54:00:00:df:e5 in network mk-pause-957460
	I0903 23:29:35.701721  151207 provision.go:143] copyHostCerts
	I0903 23:29:35.701782  151207 exec_runner.go:144] found /home/jenkins/minikube-integration/21341-109162/.minikube/ca.pem, removing ...
	I0903 23:29:35.701805  151207 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21341-109162/.minikube/ca.pem
	I0903 23:29:35.701858  151207 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21341-109162/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21341-109162/.minikube/ca.pem (1078 bytes)
	I0903 23:29:35.701943  151207 exec_runner.go:144] found /home/jenkins/minikube-integration/21341-109162/.minikube/cert.pem, removing ...
	I0903 23:29:35.701951  151207 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21341-109162/.minikube/cert.pem
	I0903 23:29:35.701970  151207 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21341-109162/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21341-109162/.minikube/cert.pem (1123 bytes)
	I0903 23:29:35.702034  151207 exec_runner.go:144] found /home/jenkins/minikube-integration/21341-109162/.minikube/key.pem, removing ...
	I0903 23:29:35.702041  151207 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21341-109162/.minikube/key.pem
	I0903 23:29:35.702057  151207 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21341-109162/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21341-109162/.minikube/key.pem (1675 bytes)
	I0903 23:29:35.702102  151207 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21341-109162/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21341-109162/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21341-109162/.minikube/certs/ca-key.pem org=jenkins.pause-957460 san=[127.0.0.1 192.168.39.90 localhost minikube pause-957460]
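
The provision step above generates a server certificate whose SANs cover every address the machine can be reached by (san=[127.0.0.1 192.168.39.90 localhost minikube pause-957460]). A sketch of that SAN handling with Go's crypto/x509 follows; it self-signs for brevity, whereas minikube signs with its CA key, and the output file name is illustrative.

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.pause-957460"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// The SAN list: every IP and DNS name the server answers to.
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.90")},
		DNSNames:    []string{"localhost", "minikube", "pause-957460"},
	}
	der, _ := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	f, _ := os.Create("server.pem")
	defer f.Close()
	pem.Encode(f, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
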
	I0903 23:29:36.149133  151207 provision.go:177] copyRemoteCerts
	I0903 23:29:36.149198  151207 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0903 23:29:36.149231  151207 main.go:141] libmachine: (pause-957460) Calling .GetSSHHostname
	I0903 23:29:36.152291  151207 main.go:141] libmachine: (pause-957460) DBG | domain pause-957460 has defined MAC address 52:54:00:00:df:e5 in network mk-pause-957460
	I0903 23:29:36.152816  151207 main.go:141] libmachine: (pause-957460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:df:e5", ip: ""} in network mk-pause-957460: {Iface:virbr1 ExpiryTime:2025-09-04 00:28:42 +0000 UTC Type:0 Mac:52:54:00:00:df:e5 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:pause-957460 Clientid:01:52:54:00:00:df:e5}
	I0903 23:29:36.152856  151207 main.go:141] libmachine: (pause-957460) DBG | domain pause-957460 has defined IP address 192.168.39.90 and MAC address 52:54:00:00:df:e5 in network mk-pause-957460
	I0903 23:29:36.153010  151207 main.go:141] libmachine: (pause-957460) Calling .GetSSHPort
	I0903 23:29:36.153260  151207 main.go:141] libmachine: (pause-957460) Calling .GetSSHKeyPath
	I0903 23:29:36.153486  151207 main.go:141] libmachine: (pause-957460) Calling .GetSSHUsername
	I0903 23:29:36.153734  151207 sshutil.go:53] new ssh client: &{IP:192.168.39.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21341-109162/.minikube/machines/pause-957460/id_rsa Username:docker}
	I0903 23:29:36.250599  151207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0903 23:29:36.281149  151207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I0903 23:29:36.316873  151207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0903 23:29:36.349796  151207 provision.go:87] duration metric: took 655.486761ms to configureAuth
	I0903 23:29:36.349828  151207 buildroot.go:189] setting minikube options for container-runtime
	I0903 23:29:36.350111  151207 config.go:182] Loaded profile config "pause-957460": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0903 23:29:36.350220  151207 main.go:141] libmachine: (pause-957460) Calling .GetSSHHostname
	I0903 23:29:36.354817  151207 main.go:141] libmachine: (pause-957460) DBG | domain pause-957460 has defined MAC address 52:54:00:00:df:e5 in network mk-pause-957460
	I0903 23:29:36.355255  151207 main.go:141] libmachine: (pause-957460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:df:e5", ip: ""} in network mk-pause-957460: {Iface:virbr1 ExpiryTime:2025-09-04 00:28:42 +0000 UTC Type:0 Mac:52:54:00:00:df:e5 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:pause-957460 Clientid:01:52:54:00:00:df:e5}
	I0903 23:29:36.355286  151207 main.go:141] libmachine: (pause-957460) DBG | domain pause-957460 has defined IP address 192.168.39.90 and MAC address 52:54:00:00:df:e5 in network mk-pause-957460
	I0903 23:29:36.355529  151207 main.go:141] libmachine: (pause-957460) Calling .GetSSHPort
	I0903 23:29:36.355726  151207 main.go:141] libmachine: (pause-957460) Calling .GetSSHKeyPath
	I0903 23:29:36.355907  151207 main.go:141] libmachine: (pause-957460) Calling .GetSSHKeyPath
	I0903 23:29:36.356133  151207 main.go:141] libmachine: (pause-957460) Calling .GetSSHUsername
	I0903 23:29:36.356322  151207 main.go:141] libmachine: Using SSH client type: native
	I0903 23:29:36.356592  151207 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83a420] 0x83d120 <nil>  [] 0s} 192.168.39.90 22 <nil> <nil>}
	I0903 23:29:36.356619  151207 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0903 23:29:38.737414  150717 crio.go:462] duration metric: took 1.454386204s to copy over tarball
	I0903 23:29:38.737486  150717 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0903 23:29:40.217850  150717 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.480338058s)
	I0903 23:29:40.217869  150717 crio.go:469] duration metric: took 1.480430934s to extract the tarball
	I0903 23:29:40.217876  150717 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0903 23:29:40.268917  150717 ssh_runner.go:195] Run: sudo crictl images --output json
	I0903 23:29:40.311990  150717 crio.go:514] all images are preloaded for cri-o runtime.
	I0903 23:29:40.312003  150717 cache_images.go:85] Images are preloaded, skipping loading
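
Both the earlier "couldn't find preloaded image" message and the "all images are preloaded" line here come from parsing `crictl images --output json`. A sketch of that detection follows; the JSON field names match crictl's usual output but are an assumption here, not taken from minikube's source.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type imageList struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func main() {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	var list imageList
	if err := json.Unmarshal(out, &list); err != nil {
		fmt.Println("bad JSON:", err)
		return
	}
	want := "registry.k8s.io/kube-apiserver:v1.34.0"
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			if tag == want {
				fmt.Println("preloaded:", want)
				return
			}
		}
	}
	fmt.Println("not preloaded:", want)
}
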
	I0903 23:29:40.312009  150717 kubeadm.go:926] updating node { 192.168.72.209 8443 v1.34.0 crio true true} ...
	I0903 23:29:40.312099  150717 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=cert-expiration-689039 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.209
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:cert-expiration-689039 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
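
The kubelet drop-in echoed above (an empty ExecStart= to clear the base unit, then the real command line) is rendered from the node config. A sketch of producing it with text/template follows; the struct fields and template text are illustrative, not minikube's actual ones.

package main

import (
	"os"
	"text/template"
)

const unit = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.Version}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.Node}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.IP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(unit))
	t.Execute(os.Stdout, struct{ Version, Node, IP string }{
		Version: "v1.34.0", Node: "cert-expiration-689039", IP: "192.168.72.209",
	})
}
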
	I0903 23:29:40.312160  150717 ssh_runner.go:195] Run: crio config
	I0903 23:29:40.354543  150717 cni.go:84] Creating CNI manager for ""
	I0903 23:29:40.354552  150717 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0903 23:29:40.354563  150717 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0903 23:29:40.354582  150717 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.209 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:cert-expiration-689039 NodeName:cert-expiration-689039 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.209"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.209 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0903 23:29:40.354691  150717 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.209
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "cert-expiration-689039"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.72.209"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.209"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0903 23:29:40.354753  150717 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0903 23:29:40.365952  150717 binaries.go:44] Found k8s binaries, skipping transfer
	I0903 23:29:40.366009  150717 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0903 23:29:40.376758  150717 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
	I0903 23:29:40.395582  150717 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0903 23:29:40.413357  150717 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
	I0903 23:29:40.432029  150717 ssh_runner.go:195] Run: grep 192.168.72.209	control-plane.minikube.internal$ /etc/hosts
	I0903 23:29:40.435770  150717 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.209	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
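
The bash one-liner above makes the control-plane /etc/hosts entry idempotent: strip any existing control-plane.minikube.internal line, append the fresh one, and replace the file. The same logic in plain Go, as a sketch; it operates on a local scratch file named "hosts" rather than the real /etc/hosts.

package main

import (
	"os"
	"strings"
)

func main() {
	const entry = "192.168.72.209\tcontrol-plane.minikube.internal"
	data, err := os.ReadFile("hosts") // stand-in for /etc/hosts
	if err != nil {
		panic(err)
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// Equivalent of grep -v $'\tcontrol-plane.minikube.internal$'.
		if !strings.HasSuffix(line, "\tcontrol-plane.minikube.internal") {
			kept = append(kept, line)
		}
	}
	kept = append(kept, entry)
	if err := os.WriteFile("hosts", []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
		panic(err)
	}
}
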
	I0903 23:29:40.448647  150717 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0903 23:29:40.581151  150717 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0903 23:29:40.616093  150717 certs.go:68] Setting up /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/cert-expiration-689039 for IP: 192.168.72.209
	I0903 23:29:40.616109  150717 certs.go:194] generating shared ca certs ...
	I0903 23:29:40.616132  150717 certs.go:226] acquiring lock for ca certs: {Name:mk06ceac58fd506484973632a7bd0b701183c2c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0903 23:29:40.616366  150717 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21341-109162/.minikube/ca.key
	I0903 23:29:40.616422  150717 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21341-109162/.minikube/proxy-client-ca.key
	I0903 23:29:40.616430  150717 certs.go:256] generating profile certs ...
	I0903 23:29:40.616505  150717 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/cert-expiration-689039/client.key
	I0903 23:29:40.616534  150717 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/cert-expiration-689039/client.crt with IP's: []
	I0903 23:29:40.677306  150717 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/cert-expiration-689039/client.crt ...
	I0903 23:29:40.677323  150717 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/cert-expiration-689039/client.crt: {Name:mkf5ecbd814becf066c6e6bb04332cd6714539dc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0903 23:29:40.677511  150717 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/cert-expiration-689039/client.key ...
	I0903 23:29:40.677520  150717 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/cert-expiration-689039/client.key: {Name:mkc7814d5f13210244d374251f5b47585e9945d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0903 23:29:40.677597  150717 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/cert-expiration-689039/apiserver.key.3a0ec45c
	I0903 23:29:40.677609  150717 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/cert-expiration-689039/apiserver.crt.3a0ec45c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.72.209]
	I0903 23:29:40.866917  150717 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/cert-expiration-689039/apiserver.crt.3a0ec45c ...
	I0903 23:29:40.866933  150717 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/cert-expiration-689039/apiserver.crt.3a0ec45c: {Name:mk0d8bf488d245574c04d2605617a7f7e8132bf5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0903 23:29:40.867095  150717 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/cert-expiration-689039/apiserver.key.3a0ec45c ...
	I0903 23:29:40.867104  150717 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/cert-expiration-689039/apiserver.key.3a0ec45c: {Name:mk6167336a0e6aa19740165e42279e71f0f8fa9f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0903 23:29:40.867175  150717 certs.go:381] copying /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/cert-expiration-689039/apiserver.crt.3a0ec45c -> /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/cert-expiration-689039/apiserver.crt
	I0903 23:29:40.867265  150717 certs.go:385] copying /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/cert-expiration-689039/apiserver.key.3a0ec45c -> /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/cert-expiration-689039/apiserver.key
	I0903 23:29:40.867313  150717 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/cert-expiration-689039/proxy-client.key
	I0903 23:29:40.867324  150717 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/cert-expiration-689039/proxy-client.crt with IP's: []
	I0903 23:29:41.012621  150717 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/cert-expiration-689039/proxy-client.crt ...
	I0903 23:29:41.012643  150717 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/cert-expiration-689039/proxy-client.crt: {Name:mk3d8edf4f96f3056f32313cfcb531f0e5fc62e3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0903 23:29:41.012800  150717 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/cert-expiration-689039/proxy-client.key ...
	I0903 23:29:41.012808  150717 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/cert-expiration-689039/proxy-client.key: {Name:mk2afd64f3163274f9651082a04d952689eac296 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0903 23:29:41.012967  150717 certs.go:484] found cert: /home/jenkins/minikube-integration/21341-109162/.minikube/certs/113288.pem (1338 bytes)
	W0903 23:29:41.012996  150717 certs.go:480] ignoring /home/jenkins/minikube-integration/21341-109162/.minikube/certs/113288_empty.pem, impossibly tiny 0 bytes
	I0903 23:29:41.013002  150717 certs.go:484] found cert: /home/jenkins/minikube-integration/21341-109162/.minikube/certs/ca-key.pem (1675 bytes)
	I0903 23:29:41.013022  150717 certs.go:484] found cert: /home/jenkins/minikube-integration/21341-109162/.minikube/certs/ca.pem (1078 bytes)
	I0903 23:29:41.013040  150717 certs.go:484] found cert: /home/jenkins/minikube-integration/21341-109162/.minikube/certs/cert.pem (1123 bytes)
	I0903 23:29:41.013057  150717 certs.go:484] found cert: /home/jenkins/minikube-integration/21341-109162/.minikube/certs/key.pem (1675 bytes)
	I0903 23:29:41.013089  150717 certs.go:484] found cert: /home/jenkins/minikube-integration/21341-109162/.minikube/files/etc/ssl/certs/1132882.pem (1708 bytes)
	I0903 23:29:41.013641  150717 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0903 23:29:41.044595  150717 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0903 23:29:41.070722  150717 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0903 23:29:41.096419  150717 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0903 23:29:41.122874  150717 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/cert-expiration-689039/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0903 23:29:41.149323  150717 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/cert-expiration-689039/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0903 23:29:41.175278  150717 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/cert-expiration-689039/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0903 23:29:41.201431  150717 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/cert-expiration-689039/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0903 23:29:41.227473  150717 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/files/etc/ssl/certs/1132882.pem --> /usr/share/ca-certificates/1132882.pem (1708 bytes)
	I0903 23:29:41.254001  150717 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0903 23:29:41.286624  150717 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/certs/113288.pem --> /usr/share/ca-certificates/113288.pem (1338 bytes)
	I0903 23:29:41.322404  150717 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0903 23:29:41.343505  150717 ssh_runner.go:195] Run: openssl version
	I0903 23:29:41.349535  150717 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0903 23:29:41.361277  150717 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0903 23:29:41.365732  150717 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep  3 22:28 /usr/share/ca-certificates/minikubeCA.pem
	I0903 23:29:41.365774  150717 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0903 23:29:41.372331  150717 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0903 23:29:41.383491  150717 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/113288.pem && ln -fs /usr/share/ca-certificates/113288.pem /etc/ssl/certs/113288.pem"
	I0903 23:29:41.394964  150717 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/113288.pem
	I0903 23:29:41.399451  150717 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep  3 22:36 /usr/share/ca-certificates/113288.pem
	I0903 23:29:41.399489  150717 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/113288.pem
	I0903 23:29:41.406076  150717 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/113288.pem /etc/ssl/certs/51391683.0"
	I0903 23:29:41.417678  150717 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1132882.pem && ln -fs /usr/share/ca-certificates/1132882.pem /etc/ssl/certs/1132882.pem"
	I0903 23:29:41.429470  150717 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1132882.pem
	I0903 23:29:41.433897  150717 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep  3 22:36 /usr/share/ca-certificates/1132882.pem
	I0903 23:29:41.433944  150717 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1132882.pem
	I0903 23:29:41.440408  150717 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1132882.pem /etc/ssl/certs/3ec20f2e.0"
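
The three openssl/ln cycles above follow OpenSSL's lookup-by-hash convention: /etc/ssl/certs must contain a <subject-hash>.0 symlink (e.g. b5213941.0) pointing at each CA so verification can find it. A sketch of one cycle follows, shelling out to openssl; paths assume a local scratch directory instead of /etc/ssl/certs.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	cert := "minikubeCA.pem" // stand-in for /usr/share/ca-certificates/minikubeCA.pem
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941", as in the log
	link := hash + ".0"
	os.Remove(link) // ln -fs semantics: replace any existing link
	if err := os.Symlink(cert, link); err != nil {
		panic(err)
	}
	fmt.Println("linked", link, "->", cert)
}
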
	I0903 23:29:41.451765  150717 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0903 23:29:41.455960  150717 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0903 23:29:41.456002  150717 kubeadm.go:392] StartCluster: {Name:cert-expiration-689039 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21147/minikube-v1.36.0-1753487480-21147-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:cert-expiration-689039 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.209 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0903 23:29:41.456057  150717 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0903 23:29:41.456115  150717 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0903 23:29:41.498507  150717 cri.go:89] found id: ""
	I0903 23:29:41.498564  150717 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0903 23:29:41.510278  150717 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0903 23:29:41.523326  150717 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0903 23:29:41.536361  150717 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0903 23:29:41.536372  150717 kubeadm.go:157] found existing configuration files:
	
	I0903 23:29:41.536426  150717 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0903 23:29:41.546666  150717 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0903 23:29:41.546713  150717 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0903 23:29:41.558154  150717 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0903 23:29:41.567679  150717 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0903 23:29:41.567734  150717 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0903 23:29:41.578000  150717 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0903 23:29:41.587798  150717 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0903 23:29:41.587839  150717 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0903 23:29:41.598371  150717 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0903 23:29:41.608203  150717 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0903 23:29:41.608249  150717 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0903 23:29:41.618331  150717 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0903 23:29:41.669729  150717 kubeadm.go:310] [init] Using Kubernetes version: v1.34.0
	I0903 23:29:41.669812  150717 kubeadm.go:310] [preflight] Running pre-flight checks
	I0903 23:29:41.779248  150717 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0903 23:29:41.779427  150717 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0903 23:29:41.779567  150717 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0903 23:29:41.793087  150717 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0903 23:29:37.703939  151427 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0903 23:29:37.703998  151427 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21341-109162/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4
	I0903 23:29:37.704018  151427 cache.go:58] Caching tarball of preloaded images
	I0903 23:29:37.704131  151427 preload.go:172] Found /home/jenkins/minikube-integration/21341-109162/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0903 23:29:37.704156  151427 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
	I0903 23:29:37.704309  151427 profile.go:143] Saving config to /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/NoKubernetes-561956/config.json ...
	I0903 23:29:37.704340  151427 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/NoKubernetes-561956/config.json: {Name:mka5d765d95b98338c3890877dd6523d7b0bbc4d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0903 23:29:37.704551  151427 start.go:360] acquireMachinesLock for NoKubernetes-561956: {Name:mkcbe368d68a51a2a3c0eadc653c4df7d9736b4b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0903 23:29:42.138928  151427 start.go:364] duration metric: took 4.434338115s to acquireMachinesLock for "NoKubernetes-561956"
	I0903 23:29:42.139009  151427 start.go:93] Provisioning new machine with config: &{Name:NoKubernetes-561956 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21147/minikube-v1.36.0-1753487480-21147-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:NoKubernetes-561956 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0903 23:29:42.139126  151427 start.go:125] createHost starting for "" (driver="kvm2")
	I0903 23:29:42.281042  151427 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0903 23:29:42.281329  151427 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21341-109162/.minikube/bin/docker-machine-driver-kvm2
	I0903 23:29:42.281409  151427 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0903 23:29:42.300134  151427 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42697
	I0903 23:29:42.300614  151427 main.go:141] libmachine: () Calling .GetVersion
	I0903 23:29:42.301198  151427 main.go:141] libmachine: Using API Version  1
	I0903 23:29:42.301224  151427 main.go:141] libmachine: () Calling .SetConfigRaw
	I0903 23:29:42.301640  151427 main.go:141] libmachine: () Calling .GetMachineName
	I0903 23:29:42.301877  151427 main.go:141] libmachine: (NoKubernetes-561956) Calling .GetMachineName
	I0903 23:29:42.302015  151427 main.go:141] libmachine: (NoKubernetes-561956) Calling .DriverName
	I0903 23:29:42.302145  151427 start.go:159] libmachine.API.Create for "NoKubernetes-561956" (driver="kvm2")
	I0903 23:29:42.302186  151427 client.go:168] LocalClient.Create starting
	I0903 23:29:42.302224  151427 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21341-109162/.minikube/certs/ca.pem
	I0903 23:29:42.302265  151427 main.go:141] libmachine: Decoding PEM data...
	I0903 23:29:42.302288  151427 main.go:141] libmachine: Parsing certificate...
	I0903 23:29:42.302357  151427 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21341-109162/.minikube/certs/cert.pem
	I0903 23:29:42.302385  151427 main.go:141] libmachine: Decoding PEM data...
	I0903 23:29:42.302402  151427 main.go:141] libmachine: Parsing certificate...
	I0903 23:29:42.302425  151427 main.go:141] libmachine: Running pre-create checks...
	I0903 23:29:42.302438  151427 main.go:141] libmachine: (NoKubernetes-561956) Calling .PreCreateCheck
	I0903 23:29:42.302805  151427 main.go:141] libmachine: (NoKubernetes-561956) Calling .GetConfigRaw
	I0903 23:29:42.303329  151427 main.go:141] libmachine: Creating machine...
	I0903 23:29:42.303351  151427 main.go:141] libmachine: (NoKubernetes-561956) Calling .Create
	I0903 23:29:42.303475  151427 main.go:141] libmachine: (NoKubernetes-561956) creating KVM machine...
	I0903 23:29:42.303495  151427 main.go:141] libmachine: (NoKubernetes-561956) creating network...
	I0903 23:29:42.304712  151427 main.go:141] libmachine: (NoKubernetes-561956) DBG | found existing default KVM network
	I0903 23:29:42.305915  151427 main.go:141] libmachine: (NoKubernetes-561956) DBG | I0903 23:29:42.305735  151484 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:53:ac:01} reservation:<nil>}
	I0903 23:29:42.306700  151427 main.go:141] libmachine: (NoKubernetes-561956) DBG | I0903 23:29:42.306618  151484 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:b0:b1:0b} reservation:<nil>}
	I0903 23:29:42.308206  151427 main.go:141] libmachine: (NoKubernetes-561956) DBG | I0903 23:29:42.308113  151484 network.go:206] using free private subnet 192.168.61.0/24: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00027cdf0}
	I0903 23:29:42.308235  151427 main.go:141] libmachine: (NoKubernetes-561956) DBG | created network xml: 
	I0903 23:29:42.308262  151427 main.go:141] libmachine: (NoKubernetes-561956) DBG | <network>
	I0903 23:29:42.308275  151427 main.go:141] libmachine: (NoKubernetes-561956) DBG |   <name>mk-NoKubernetes-561956</name>
	I0903 23:29:42.308287  151427 main.go:141] libmachine: (NoKubernetes-561956) DBG |   <dns enable='no'/>
	I0903 23:29:42.308294  151427 main.go:141] libmachine: (NoKubernetes-561956) DBG |   
	I0903 23:29:42.308307  151427 main.go:141] libmachine: (NoKubernetes-561956) DBG |   <ip address='192.168.61.1' netmask='255.255.255.0'>
	I0903 23:29:42.308326  151427 main.go:141] libmachine: (NoKubernetes-561956) DBG |     <dhcp>
	I0903 23:29:42.308340  151427 main.go:141] libmachine: (NoKubernetes-561956) DBG |       <range start='192.168.61.2' end='192.168.61.253'/>
	I0903 23:29:42.308348  151427 main.go:141] libmachine: (NoKubernetes-561956) DBG |     </dhcp>
	I0903 23:29:42.308353  151427 main.go:141] libmachine: (NoKubernetes-561956) DBG |   </ip>
	I0903 23:29:42.308357  151427 main.go:141] libmachine: (NoKubernetes-561956) DBG |   
	I0903 23:29:42.308364  151427 main.go:141] libmachine: (NoKubernetes-561956) DBG | </network>
	I0903 23:29:42.308368  151427 main.go:141] libmachine: (NoKubernetes-561956) DBG | 
	I0903 23:29:42.442760  151427 main.go:141] libmachine: (NoKubernetes-561956) DBG | trying to create private KVM network mk-NoKubernetes-561956 192.168.61.0/24...
	I0903 23:29:42.533804  151427 main.go:141] libmachine: (NoKubernetes-561956) setting up store path in /home/jenkins/minikube-integration/21341-109162/.minikube/machines/NoKubernetes-561956 ...
	I0903 23:29:42.533833  151427 main.go:141] libmachine: (NoKubernetes-561956) DBG | private KVM network mk-NoKubernetes-561956 192.168.61.0/24 created
	I0903 23:29:42.533846  151427 main.go:141] libmachine: (NoKubernetes-561956) building disk image from file:///home/jenkins/minikube-integration/21341-109162/.minikube/cache/iso/amd64/minikube-v1.36.0-1753487480-21147-amd64.iso
	I0903 23:29:42.533884  151427 main.go:141] libmachine: (NoKubernetes-561956) Downloading /home/jenkins/minikube-integration/21341-109162/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/21341-109162/.minikube/cache/iso/amd64/minikube-v1.36.0-1753487480-21147-amd64.iso...
	I0903 23:29:42.533905  151427 main.go:141] libmachine: (NoKubernetes-561956) DBG | I0903 23:29:42.533331  151484 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/21341-109162/.minikube
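
The network.go lines above (skipping 192.168.39.0/24 and 192.168.50.0/24, settling on 192.168.61.0/24) show a scan for the first private /24 no existing libvirt network occupies. A toy Go sketch of that scan follows; the taken list is hard-coded here, and the +11 step is only inferred from the 39 -> 50 -> 61 progression in this log, whereas minikube discovers both from the host.

package main

import "fmt"

func main() {
	taken := map[string]bool{"192.168.39.0/24": true, "192.168.50.0/24": true}
	for octet := 39; octet <= 254; octet += 11 {
		cidr := fmt.Sprintf("192.168.%d.0/24", octet)
		if taken[cidr] {
			fmt.Println("skipping subnet", cidr, "that is taken")
			continue
		}
		fmt.Println("using free private subnet", cidr)
		return
	}
	fmt.Println("no free subnet found")
}
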
	I0903 23:29:41.794645  150717 out.go:252]   - Generating certificates and keys ...
	I0903 23:29:41.794764  150717 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0903 23:29:41.794846  150717 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0903 23:29:42.074737  150717 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0903 23:29:42.755623  150717 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0903 23:29:43.059826  150717 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0903 23:29:43.209501  150717 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0903 23:29:43.430577  150717 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0903 23:29:43.431131  150717 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [cert-expiration-689039 localhost] and IPs [192.168.72.209 127.0.0.1 ::1]
	I0903 23:29:41.895391  151207 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0903 23:29:41.895425  151207 machine.go:96] duration metric: took 6.57532365s to provisionDockerMachine
	I0903 23:29:41.895437  151207 start.go:293] postStartSetup for "pause-957460" (driver="kvm2")
	I0903 23:29:41.895449  151207 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0903 23:29:41.895490  151207 main.go:141] libmachine: (pause-957460) Calling .DriverName
	I0903 23:29:41.895847  151207 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0903 23:29:41.895879  151207 main.go:141] libmachine: (pause-957460) Calling .GetSSHHostname
	I0903 23:29:41.898901  151207 main.go:141] libmachine: (pause-957460) DBG | domain pause-957460 has defined MAC address 52:54:00:00:df:e5 in network mk-pause-957460
	I0903 23:29:41.899360  151207 main.go:141] libmachine: (pause-957460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:df:e5", ip: ""} in network mk-pause-957460: {Iface:virbr1 ExpiryTime:2025-09-04 00:28:42 +0000 UTC Type:0 Mac:52:54:00:00:df:e5 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:pause-957460 Clientid:01:52:54:00:00:df:e5}
	I0903 23:29:41.899389  151207 main.go:141] libmachine: (pause-957460) DBG | domain pause-957460 has defined IP address 192.168.39.90 and MAC address 52:54:00:00:df:e5 in network mk-pause-957460
	I0903 23:29:41.899548  151207 main.go:141] libmachine: (pause-957460) Calling .GetSSHPort
	I0903 23:29:41.899744  151207 main.go:141] libmachine: (pause-957460) Calling .GetSSHKeyPath
	I0903 23:29:41.899932  151207 main.go:141] libmachine: (pause-957460) Calling .GetSSHUsername
	I0903 23:29:41.900084  151207 sshutil.go:53] new ssh client: &{IP:192.168.39.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21341-109162/.minikube/machines/pause-957460/id_rsa Username:docker}
	I0903 23:29:41.983185  151207 ssh_runner.go:195] Run: cat /etc/os-release
	I0903 23:29:41.988001  151207 info.go:137] Remote host: Buildroot 2025.02
	I0903 23:29:41.988025  151207 filesync.go:126] Scanning /home/jenkins/minikube-integration/21341-109162/.minikube/addons for local assets ...
	I0903 23:29:41.988098  151207 filesync.go:126] Scanning /home/jenkins/minikube-integration/21341-109162/.minikube/files for local assets ...
	I0903 23:29:41.988190  151207 filesync.go:149] local asset: /home/jenkins/minikube-integration/21341-109162/.minikube/files/etc/ssl/certs/1132882.pem -> 1132882.pem in /etc/ssl/certs
	I0903 23:29:41.988294  151207 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0903 23:29:42.000923  151207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/files/etc/ssl/certs/1132882.pem --> /etc/ssl/certs/1132882.pem (1708 bytes)
	I0903 23:29:42.030779  151207 start.go:296] duration metric: took 135.327248ms for postStartSetup
	I0903 23:29:42.030820  151207 fix.go:56] duration metric: took 6.734411905s for fixHost
	I0903 23:29:42.030840  151207 main.go:141] libmachine: (pause-957460) Calling .GetSSHHostname
	I0903 23:29:42.033700  151207 main.go:141] libmachine: (pause-957460) DBG | domain pause-957460 has defined MAC address 52:54:00:00:df:e5 in network mk-pause-957460
	I0903 23:29:42.034091  151207 main.go:141] libmachine: (pause-957460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:df:e5", ip: ""} in network mk-pause-957460: {Iface:virbr1 ExpiryTime:2025-09-04 00:28:42 +0000 UTC Type:0 Mac:52:54:00:00:df:e5 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:pause-957460 Clientid:01:52:54:00:00:df:e5}
	I0903 23:29:42.034119  151207 main.go:141] libmachine: (pause-957460) DBG | domain pause-957460 has defined IP address 192.168.39.90 and MAC address 52:54:00:00:df:e5 in network mk-pause-957460
	I0903 23:29:42.034309  151207 main.go:141] libmachine: (pause-957460) Calling .GetSSHPort
	I0903 23:29:42.034516  151207 main.go:141] libmachine: (pause-957460) Calling .GetSSHKeyPath
	I0903 23:29:42.034674  151207 main.go:141] libmachine: (pause-957460) Calling .GetSSHKeyPath
	I0903 23:29:42.034876  151207 main.go:141] libmachine: (pause-957460) Calling .GetSSHUsername
	I0903 23:29:42.035060  151207 main.go:141] libmachine: Using SSH client type: native
	I0903 23:29:42.035271  151207 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83a420] 0x83d120 <nil>  [] 0s} 192.168.39.90 22 <nil> <nil>}
	I0903 23:29:42.035285  151207 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0903 23:29:42.138745  151207 main.go:141] libmachine: SSH cmd err, output: <nil>: 1756942182.132151632
	
	I0903 23:29:42.138781  151207 fix.go:216] guest clock: 1756942182.132151632
	I0903 23:29:42.138792  151207 fix.go:229] Guest: 2025-09-03 23:29:42.132151632 +0000 UTC Remote: 2025-09-03 23:29:42.030823493 +0000 UTC m=+8.383499424 (delta=101.328139ms)
	I0903 23:29:42.138820  151207 fix.go:200] guest clock delta is within tolerance: 101.328139ms
	I0903 23:29:42.138828  151207 start.go:83] releasing machines lock for "pause-957460", held for 6.842450059s
	I0903 23:29:42.138862  151207 main.go:141] libmachine: (pause-957460) Calling .DriverName
	I0903 23:29:42.139187  151207 main.go:141] libmachine: (pause-957460) Calling .GetIP
	I0903 23:29:42.142055  151207 main.go:141] libmachine: (pause-957460) DBG | domain pause-957460 has defined MAC address 52:54:00:00:df:e5 in network mk-pause-957460
	I0903 23:29:42.142383  151207 main.go:141] libmachine: (pause-957460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:df:e5", ip: ""} in network mk-pause-957460: {Iface:virbr1 ExpiryTime:2025-09-04 00:28:42 +0000 UTC Type:0 Mac:52:54:00:00:df:e5 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:pause-957460 Clientid:01:52:54:00:00:df:e5}
	I0903 23:29:42.142413  151207 main.go:141] libmachine: (pause-957460) DBG | domain pause-957460 has defined IP address 192.168.39.90 and MAC address 52:54:00:00:df:e5 in network mk-pause-957460
	I0903 23:29:42.142557  151207 main.go:141] libmachine: (pause-957460) Calling .DriverName
	I0903 23:29:42.143061  151207 main.go:141] libmachine: (pause-957460) Calling .DriverName
	I0903 23:29:42.143240  151207 main.go:141] libmachine: (pause-957460) Calling .DriverName
	I0903 23:29:42.143335  151207 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0903 23:29:42.143393  151207 main.go:141] libmachine: (pause-957460) Calling .GetSSHHostname
	I0903 23:29:42.143425  151207 ssh_runner.go:195] Run: cat /version.json
	I0903 23:29:42.143446  151207 main.go:141] libmachine: (pause-957460) Calling .GetSSHHostname
	I0903 23:29:42.146189  151207 main.go:141] libmachine: (pause-957460) DBG | domain pause-957460 has defined MAC address 52:54:00:00:df:e5 in network mk-pause-957460
	I0903 23:29:42.146538  151207 main.go:141] libmachine: (pause-957460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:df:e5", ip: ""} in network mk-pause-957460: {Iface:virbr1 ExpiryTime:2025-09-04 00:28:42 +0000 UTC Type:0 Mac:52:54:00:00:df:e5 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:pause-957460 Clientid:01:52:54:00:00:df:e5}
	I0903 23:29:42.146560  151207 main.go:141] libmachine: (pause-957460) DBG | domain pause-957460 has defined MAC address 52:54:00:00:df:e5 in network mk-pause-957460
	I0903 23:29:42.146588  151207 main.go:141] libmachine: (pause-957460) DBG | domain pause-957460 has defined IP address 192.168.39.90 and MAC address 52:54:00:00:df:e5 in network mk-pause-957460
	I0903 23:29:42.146748  151207 main.go:141] libmachine: (pause-957460) Calling .GetSSHPort
	I0903 23:29:42.146918  151207 main.go:141] libmachine: (pause-957460) Calling .GetSSHKeyPath
	I0903 23:29:42.147051  151207 main.go:141] libmachine: (pause-957460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:df:e5", ip: ""} in network mk-pause-957460: {Iface:virbr1 ExpiryTime:2025-09-04 00:28:42 +0000 UTC Type:0 Mac:52:54:00:00:df:e5 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:pause-957460 Clientid:01:52:54:00:00:df:e5}
	I0903 23:29:42.147064  151207 main.go:141] libmachine: (pause-957460) Calling .GetSSHUsername
	I0903 23:29:42.147076  151207 main.go:141] libmachine: (pause-957460) DBG | domain pause-957460 has defined IP address 192.168.39.90 and MAC address 52:54:00:00:df:e5 in network mk-pause-957460
	I0903 23:29:42.147227  151207 main.go:141] libmachine: (pause-957460) Calling .GetSSHPort
	I0903 23:29:42.147233  151207 sshutil.go:53] new ssh client: &{IP:192.168.39.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21341-109162/.minikube/machines/pause-957460/id_rsa Username:docker}
	I0903 23:29:42.147373  151207 main.go:141] libmachine: (pause-957460) Calling .GetSSHKeyPath
	I0903 23:29:42.147517  151207 main.go:141] libmachine: (pause-957460) Calling .GetSSHUsername
	I0903 23:29:42.147656  151207 sshutil.go:53] new ssh client: &{IP:192.168.39.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21341-109162/.minikube/machines/pause-957460/id_rsa Username:docker}
	I0903 23:29:42.228035  151207 ssh_runner.go:195] Run: systemctl --version
	I0903 23:29:42.260654  151207 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0903 23:29:42.412845  151207 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0903 23:29:42.422007  151207 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0903 23:29:42.422075  151207 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0903 23:29:42.433083  151207 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
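ssh_runner logs that find command with its shell quoting stripped; with the quoting restored, the disable step is equivalent to this sketch:
	sudo find /etc/cni/net.d -maxdepth 1 -type f \
	    \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
	    -printf '%p, ' -exec sh -c 'sudo mv {} {}.mk_disabled' \;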
	I0903 23:29:42.433112  151207 start.go:495] detecting cgroup driver to use...
	I0903 23:29:42.433177  151207 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0903 23:29:42.458445  151207 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0903 23:29:42.480967  151207 docker.go:218] disabling cri-docker service (if available) ...
	I0903 23:29:42.481031  151207 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0903 23:29:42.498735  151207 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0903 23:29:42.518587  151207 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0903 23:29:42.721538  151207 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0903 23:29:42.930794  151207 docker.go:234] disabling docker service ...
	I0903 23:29:42.930878  151207 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0903 23:29:42.968840  151207 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0903 23:29:42.984510  151207 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0903 23:29:43.163361  151207 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0903 23:29:43.333615  151207 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0903 23:29:43.355212  151207 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0903 23:29:43.385327  151207 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0903 23:29:43.385414  151207 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0903 23:29:43.400779  151207 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0903 23:29:43.400847  151207 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0903 23:29:43.413990  151207 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0903 23:29:43.429581  151207 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0903 23:29:43.444858  151207 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0903 23:29:43.457225  151207 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0903 23:29:43.472700  151207 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0903 23:29:43.486584  151207 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0903 23:29:43.498285  151207 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0903 23:29:43.508096  151207 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0903 23:29:43.520704  151207 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0903 23:29:43.681356  151207 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0903 23:29:45.289300  151207 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1.607899314s)
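Taken together, the sed edits above should leave the 02-crio.conf drop-in with roughly these keys before the restart picks them up; a sketch of the expected result (any other keys in the file are assumed untouched):
	$ sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|default_sysctls|unprivileged' /etc/crio/crio.conf.d/02-crio.conf
	pause_image = "registry.k8s.io/pause:3.10.1"
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]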
	I0903 23:29:45.289340  151207 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0903 23:29:45.289423  151207 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0903 23:29:45.295877  151207 start.go:563] Will wait 60s for crictl version
	I0903 23:29:45.295941  151207 ssh_runner.go:195] Run: which crictl
	I0903 23:29:45.300396  151207 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0903 23:29:45.338415  151207 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0903 23:29:45.338517  151207 ssh_runner.go:195] Run: crio --version
	I0903 23:29:45.376964  151207 ssh_runner.go:195] Run: crio --version
	I0903 23:29:45.416935  151207 out.go:179] * Preparing Kubernetes v1.34.0 on CRI-O 1.29.1 ...
	I0903 23:29:43.570321  150717 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0903 23:29:43.570668  150717 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [cert-expiration-689039 localhost] and IPs [192.168.72.209 127.0.0.1 ::1]
	I0903 23:29:43.700132  150717 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0903 23:29:44.316275  150717 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0903 23:29:44.849798  150717 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0903 23:29:44.849931  150717 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0903 23:29:45.186276  150717 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0903 23:29:45.807964  150717 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0903 23:29:46.023608  150717 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0903 23:29:46.266540  150717 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0903 23:29:46.555732  150717 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0903 23:29:46.555917  150717 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0903 23:29:46.559489  150717 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0903 23:29:43.085166  151427 main.go:141] libmachine: (NoKubernetes-561956) DBG | I0903 23:29:43.085026  151484 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/21341-109162/.minikube/machines/NoKubernetes-561956/id_rsa...
	I0903 23:29:43.388213  151427 main.go:141] libmachine: (NoKubernetes-561956) DBG | I0903 23:29:43.388091  151484 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/21341-109162/.minikube/machines/NoKubernetes-561956/NoKubernetes-561956.rawdisk...
	I0903 23:29:43.388238  151427 main.go:141] libmachine: (NoKubernetes-561956) DBG | Writing magic tar header
	I0903 23:29:43.388255  151427 main.go:141] libmachine: (NoKubernetes-561956) DBG | Writing SSH key tar header
	I0903 23:29:43.388266  151427 main.go:141] libmachine: (NoKubernetes-561956) DBG | I0903 23:29:43.388227  151484 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/21341-109162/.minikube/machines/NoKubernetes-561956 ...
	I0903 23:29:43.388381  151427 main.go:141] libmachine: (NoKubernetes-561956) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21341-109162/.minikube/machines/NoKubernetes-561956
	I0903 23:29:43.388404  151427 main.go:141] libmachine: (NoKubernetes-561956) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21341-109162/.minikube/machines
	I0903 23:29:43.388416  151427 main.go:141] libmachine: (NoKubernetes-561956) setting executable bit set on /home/jenkins/minikube-integration/21341-109162/.minikube/machines/NoKubernetes-561956 (perms=drwx------)
	I0903 23:29:43.388428  151427 main.go:141] libmachine: (NoKubernetes-561956) setting executable bit set on /home/jenkins/minikube-integration/21341-109162/.minikube/machines (perms=drwxr-xr-x)
	I0903 23:29:43.388434  151427 main.go:141] libmachine: (NoKubernetes-561956) setting executable bit set on /home/jenkins/minikube-integration/21341-109162/.minikube (perms=drwxr-xr-x)
	I0903 23:29:43.388442  151427 main.go:141] libmachine: (NoKubernetes-561956) setting executable bit set on /home/jenkins/minikube-integration/21341-109162 (perms=drwxrwxr-x)
	I0903 23:29:43.388447  151427 main.go:141] libmachine: (NoKubernetes-561956) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0903 23:29:43.388455  151427 main.go:141] libmachine: (NoKubernetes-561956) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0903 23:29:43.388464  151427 main.go:141] libmachine: (NoKubernetes-561956) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21341-109162/.minikube
	I0903 23:29:43.388470  151427 main.go:141] libmachine: (NoKubernetes-561956) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21341-109162
	I0903 23:29:43.388478  151427 main.go:141] libmachine: (NoKubernetes-561956) creating domain...
	I0903 23:29:43.388484  151427 main.go:141] libmachine: (NoKubernetes-561956) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0903 23:29:43.388489  151427 main.go:141] libmachine: (NoKubernetes-561956) DBG | checking permissions on dir: /home/jenkins
	I0903 23:29:43.388494  151427 main.go:141] libmachine: (NoKubernetes-561956) DBG | checking permissions on dir: /home
	I0903 23:29:43.388500  151427 main.go:141] libmachine: (NoKubernetes-561956) DBG | skipping /home - not owner
	I0903 23:29:43.389990  151427 main.go:141] libmachine: (NoKubernetes-561956) define libvirt domain using xml: 
	I0903 23:29:43.390011  151427 main.go:141] libmachine: (NoKubernetes-561956) <domain type='kvm'>
	I0903 23:29:43.390021  151427 main.go:141] libmachine: (NoKubernetes-561956)   <name>NoKubernetes-561956</name>
	I0903 23:29:43.390028  151427 main.go:141] libmachine: (NoKubernetes-561956)   <memory unit='MiB'>3072</memory>
	I0903 23:29:43.390038  151427 main.go:141] libmachine: (NoKubernetes-561956)   <vcpu>2</vcpu>
	I0903 23:29:43.390051  151427 main.go:141] libmachine: (NoKubernetes-561956)   <features>
	I0903 23:29:43.390084  151427 main.go:141] libmachine: (NoKubernetes-561956)     <acpi/>
	I0903 23:29:43.390106  151427 main.go:141] libmachine: (NoKubernetes-561956)     <apic/>
	I0903 23:29:43.390120  151427 main.go:141] libmachine: (NoKubernetes-561956)     <pae/>
	I0903 23:29:43.390126  151427 main.go:141] libmachine: (NoKubernetes-561956)     
	I0903 23:29:43.390134  151427 main.go:141] libmachine: (NoKubernetes-561956)   </features>
	I0903 23:29:43.390142  151427 main.go:141] libmachine: (NoKubernetes-561956)   <cpu mode='host-passthrough'>
	I0903 23:29:43.390149  151427 main.go:141] libmachine: (NoKubernetes-561956)   
	I0903 23:29:43.390156  151427 main.go:141] libmachine: (NoKubernetes-561956)   </cpu>
	I0903 23:29:43.390163  151427 main.go:141] libmachine: (NoKubernetes-561956)   <os>
	I0903 23:29:43.390169  151427 main.go:141] libmachine: (NoKubernetes-561956)     <type>hvm</type>
	I0903 23:29:43.390177  151427 main.go:141] libmachine: (NoKubernetes-561956)     <boot dev='cdrom'/>
	I0903 23:29:43.390188  151427 main.go:141] libmachine: (NoKubernetes-561956)     <boot dev='hd'/>
	I0903 23:29:43.390197  151427 main.go:141] libmachine: (NoKubernetes-561956)     <bootmenu enable='no'/>
	I0903 23:29:43.390203  151427 main.go:141] libmachine: (NoKubernetes-561956)   </os>
	I0903 23:29:43.390213  151427 main.go:141] libmachine: (NoKubernetes-561956)   <devices>
	I0903 23:29:43.390220  151427 main.go:141] libmachine: (NoKubernetes-561956)     <disk type='file' device='cdrom'>
	I0903 23:29:43.390233  151427 main.go:141] libmachine: (NoKubernetes-561956)       <source file='/home/jenkins/minikube-integration/21341-109162/.minikube/machines/NoKubernetes-561956/boot2docker.iso'/>
	I0903 23:29:43.390240  151427 main.go:141] libmachine: (NoKubernetes-561956)       <target dev='hdc' bus='scsi'/>
	I0903 23:29:43.390247  151427 main.go:141] libmachine: (NoKubernetes-561956)       <readonly/>
	I0903 23:29:43.390253  151427 main.go:141] libmachine: (NoKubernetes-561956)     </disk>
	I0903 23:29:43.390276  151427 main.go:141] libmachine: (NoKubernetes-561956)     <disk type='file' device='disk'>
	I0903 23:29:43.390299  151427 main.go:141] libmachine: (NoKubernetes-561956)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0903 23:29:43.390317  151427 main.go:141] libmachine: (NoKubernetes-561956)       <source file='/home/jenkins/minikube-integration/21341-109162/.minikube/machines/NoKubernetes-561956/NoKubernetes-561956.rawdisk'/>
	I0903 23:29:43.390329  151427 main.go:141] libmachine: (NoKubernetes-561956)       <target dev='hda' bus='virtio'/>
	I0903 23:29:43.390341  151427 main.go:141] libmachine: (NoKubernetes-561956)     </disk>
	I0903 23:29:43.390351  151427 main.go:141] libmachine: (NoKubernetes-561956)     <interface type='network'>
	I0903 23:29:43.390362  151427 main.go:141] libmachine: (NoKubernetes-561956)       <source network='mk-NoKubernetes-561956'/>
	I0903 23:29:43.390373  151427 main.go:141] libmachine: (NoKubernetes-561956)       <model type='virtio'/>
	I0903 23:29:43.390395  151427 main.go:141] libmachine: (NoKubernetes-561956)     </interface>
	I0903 23:29:43.390419  151427 main.go:141] libmachine: (NoKubernetes-561956)     <interface type='network'>
	I0903 23:29:43.390431  151427 main.go:141] libmachine: (NoKubernetes-561956)       <source network='default'/>
	I0903 23:29:43.390441  151427 main.go:141] libmachine: (NoKubernetes-561956)       <model type='virtio'/>
	I0903 23:29:43.390450  151427 main.go:141] libmachine: (NoKubernetes-561956)     </interface>
	I0903 23:29:43.390459  151427 main.go:141] libmachine: (NoKubernetes-561956)     <serial type='pty'>
	I0903 23:29:43.390467  151427 main.go:141] libmachine: (NoKubernetes-561956)       <target port='0'/>
	I0903 23:29:43.390476  151427 main.go:141] libmachine: (NoKubernetes-561956)     </serial>
	I0903 23:29:43.390491  151427 main.go:141] libmachine: (NoKubernetes-561956)     <console type='pty'>
	I0903 23:29:43.390507  151427 main.go:141] libmachine: (NoKubernetes-561956)       <target type='serial' port='0'/>
	I0903 23:29:43.390519  151427 main.go:141] libmachine: (NoKubernetes-561956)     </console>
	I0903 23:29:43.390528  151427 main.go:141] libmachine: (NoKubernetes-561956)     <rng model='virtio'>
	I0903 23:29:43.390537  151427 main.go:141] libmachine: (NoKubernetes-561956)       <backend model='random'>/dev/random</backend>
	I0903 23:29:43.390546  151427 main.go:141] libmachine: (NoKubernetes-561956)     </rng>
	I0903 23:29:43.390555  151427 main.go:141] libmachine: (NoKubernetes-561956)     
	I0903 23:29:43.390561  151427 main.go:141] libmachine: (NoKubernetes-561956)     
	I0903 23:29:43.390573  151427 main.go:141] libmachine: (NoKubernetes-561956)   </devices>
	I0903 23:29:43.390591  151427 main.go:141] libmachine: (NoKubernetes-561956) </domain>
	I0903 23:29:43.390605  151427 main.go:141] libmachine: (NoKubernetes-561956) 
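libmachine performs the define/start through the libvirt API; the equivalent virsh flow, sketched with a hypothetical file holding the XML above:
	virsh define NoKubernetes-561956.xml                   # register the domain from the XML
	virsh start NoKubernetes-561956                        # boot it
	virsh domifaddr NoKubernetes-561956 --source lease     # the "waiting for IP" loop below polls this DHCP lease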
	I0903 23:29:43.411697  151427 main.go:141] libmachine: (NoKubernetes-561956) DBG | domain NoKubernetes-561956 has defined MAC address 52:54:00:3a:6b:ee in network default
	I0903 23:29:43.412551  151427 main.go:141] libmachine: (NoKubernetes-561956) starting domain...
	I0903 23:29:43.412581  151427 main.go:141] libmachine: (NoKubernetes-561956) DBG | domain NoKubernetes-561956 has defined MAC address 52:54:00:64:e0:3d in network mk-NoKubernetes-561956
	I0903 23:29:43.412590  151427 main.go:141] libmachine: (NoKubernetes-561956) ensuring networks are active...
	I0903 23:29:43.413453  151427 main.go:141] libmachine: (NoKubernetes-561956) Ensuring network default is active
	I0903 23:29:43.413799  151427 main.go:141] libmachine: (NoKubernetes-561956) Ensuring network mk-NoKubernetes-561956 is active
	I0903 23:29:43.414702  151427 main.go:141] libmachine: (NoKubernetes-561956) getting domain XML...
	I0903 23:29:43.415474  151427 main.go:141] libmachine: (NoKubernetes-561956) creating domain...
	I0903 23:29:45.175742  151427 main.go:141] libmachine: (NoKubernetes-561956) waiting for IP...
	I0903 23:29:45.176750  151427 main.go:141] libmachine: (NoKubernetes-561956) DBG | domain NoKubernetes-561956 has defined MAC address 52:54:00:64:e0:3d in network mk-NoKubernetes-561956
	I0903 23:29:45.177251  151427 main.go:141] libmachine: (NoKubernetes-561956) DBG | unable to find current IP address of domain NoKubernetes-561956 in network mk-NoKubernetes-561956
	I0903 23:29:45.177316  151427 main.go:141] libmachine: (NoKubernetes-561956) DBG | I0903 23:29:45.177241  151484 retry.go:31] will retry after 262.762338ms: waiting for domain to come up
	I0903 23:29:45.441968  151427 main.go:141] libmachine: (NoKubernetes-561956) DBG | domain NoKubernetes-561956 has defined MAC address 52:54:00:64:e0:3d in network mk-NoKubernetes-561956
	I0903 23:29:45.442567  151427 main.go:141] libmachine: (NoKubernetes-561956) DBG | unable to find current IP address of domain NoKubernetes-561956 in network mk-NoKubernetes-561956
	I0903 23:29:45.442622  151427 main.go:141] libmachine: (NoKubernetes-561956) DBG | I0903 23:29:45.442551  151484 retry.go:31] will retry after 291.065996ms: waiting for domain to come up
	I0903 23:29:45.735275  151427 main.go:141] libmachine: (NoKubernetes-561956) DBG | domain NoKubernetes-561956 has defined MAC address 52:54:00:64:e0:3d in network mk-NoKubernetes-561956
	I0903 23:29:45.735779  151427 main.go:141] libmachine: (NoKubernetes-561956) DBG | unable to find current IP address of domain NoKubernetes-561956 in network mk-NoKubernetes-561956
	I0903 23:29:45.735823  151427 main.go:141] libmachine: (NoKubernetes-561956) DBG | I0903 23:29:45.735775  151484 retry.go:31] will retry after 302.77737ms: waiting for domain to come up
	I0903 23:29:46.040606  151427 main.go:141] libmachine: (NoKubernetes-561956) DBG | domain NoKubernetes-561956 has defined MAC address 52:54:00:64:e0:3d in network mk-NoKubernetes-561956
	I0903 23:29:46.041140  151427 main.go:141] libmachine: (NoKubernetes-561956) DBG | unable to find current IP address of domain NoKubernetes-561956 in network mk-NoKubernetes-561956
	I0903 23:29:46.041670  151427 main.go:141] libmachine: (NoKubernetes-561956) DBG | I0903 23:29:46.041233  151484 retry.go:31] will retry after 598.623418ms: waiting for domain to come up
	I0903 23:29:46.642479  151427 main.go:141] libmachine: (NoKubernetes-561956) DBG | domain NoKubernetes-561956 has defined MAC address 52:54:00:64:e0:3d in network mk-NoKubernetes-561956
	I0903 23:29:46.643238  151427 main.go:141] libmachine: (NoKubernetes-561956) DBG | unable to find current IP address of domain NoKubernetes-561956 in network mk-NoKubernetes-561956
	I0903 23:29:46.643279  151427 main.go:141] libmachine: (NoKubernetes-561956) DBG | I0903 23:29:46.643199  151484 retry.go:31] will retry after 473.14286ms: waiting for domain to come up
	I0903 23:29:47.117795  151427 main.go:141] libmachine: (NoKubernetes-561956) DBG | domain NoKubernetes-561956 has defined MAC address 52:54:00:64:e0:3d in network mk-NoKubernetes-561956
	I0903 23:29:47.118386  151427 main.go:141] libmachine: (NoKubernetes-561956) DBG | unable to find current IP address of domain NoKubernetes-561956 in network mk-NoKubernetes-561956
	I0903 23:29:47.118418  151427 main.go:141] libmachine: (NoKubernetes-561956) DBG | I0903 23:29:47.118344  151484 retry.go:31] will retry after 617.307283ms: waiting for domain to come up
	I0903 23:29:46.561123  150717 out.go:252]   - Booting up control plane ...
	I0903 23:29:46.561241  150717 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0903 23:29:46.561360  150717 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0903 23:29:46.562434  150717 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0903 23:29:46.588511  150717 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0903 23:29:46.588679  150717 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0903 23:29:46.596478  150717 kubeadm.go:310] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0903 23:29:46.596695  150717 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0903 23:29:46.596826  150717 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0903 23:29:46.783362  150717 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0903 23:29:46.783498  150717 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0903 23:29:47.785873  150717 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.002389235s
	I0903 23:29:47.788738  150717 kubeadm.go:310] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0903 23:29:47.788852  150717 kubeadm.go:310] [control-plane-check] Checking kube-apiserver at https://192.168.72.209:8443/livez
	I0903 23:29:47.788994  150717 kubeadm.go:310] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0903 23:29:47.789114  150717 kubeadm.go:310] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
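Those three control-plane-check probes are plain HTTPS endpoints; run by hand from the node they would look like this (-k because the serving certs are cluster-internal):
	curl -k https://192.168.72.209:8443/livez       # kube-apiserver
	curl -k https://127.0.0.1:10257/healthz         # kube-controller-manager
	curl -k https://127.0.0.1:10259/livez           # kube-scheduler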
	I0903 23:29:45.417919  151207 main.go:141] libmachine: (pause-957460) Calling .GetIP
	I0903 23:29:45.421328  151207 main.go:141] libmachine: (pause-957460) DBG | domain pause-957460 has defined MAC address 52:54:00:00:df:e5 in network mk-pause-957460
	I0903 23:29:45.421808  151207 main.go:141] libmachine: (pause-957460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:df:e5", ip: ""} in network mk-pause-957460: {Iface:virbr1 ExpiryTime:2025-09-04 00:28:42 +0000 UTC Type:0 Mac:52:54:00:00:df:e5 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:pause-957460 Clientid:01:52:54:00:00:df:e5}
	I0903 23:29:45.421837  151207 main.go:141] libmachine: (pause-957460) DBG | domain pause-957460 has defined IP address 192.168.39.90 and MAC address 52:54:00:00:df:e5 in network mk-pause-957460
	I0903 23:29:45.422075  151207 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0903 23:29:45.426580  151207 kubeadm.go:875] updating cluster {Name:pause-957460 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21147/minikube-v1.36.0-1753487480-21147-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:pause-957460 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.90 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0903 23:29:45.426697  151207 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0903 23:29:45.426736  151207 ssh_runner.go:195] Run: sudo crictl images --output json
	I0903 23:29:45.473814  151207 crio.go:514] all images are preloaded for cri-o runtime.
	I0903 23:29:45.473844  151207 crio.go:433] Images already preloaded, skipping extraction
	I0903 23:29:45.473895  151207 ssh_runner.go:195] Run: sudo crictl images --output json
	I0903 23:29:45.520433  151207 crio.go:514] all images are preloaded for cri-o runtime.
	I0903 23:29:45.520461  151207 cache_images.go:85] Images are preloaded, skipping loading
	I0903 23:29:45.520472  151207 kubeadm.go:926] updating node { 192.168.39.90 8443 v1.34.0 crio true true} ...
	I0903 23:29:45.520584  151207 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-957460 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.90
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:pause-957460 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
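That unit text is written to the 10-kubeadm.conf drop-in (scp'd a few lines below); once the daemon-reload has run, the merged unit can be inspected on the node with, for example:
	systemctl cat kubelet            # kubelet.service plus the 10-kubeadm.conf drop-in
	systemctl status kubelet --no-pager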
	I0903 23:29:45.520661  151207 ssh_runner.go:195] Run: crio config
	I0903 23:29:45.572672  151207 cni.go:84] Creating CNI manager for ""
	I0903 23:29:45.572700  151207 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0903 23:29:45.572716  151207 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0903 23:29:45.572747  151207 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.90 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-957460 NodeName:pause-957460 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.90"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.90 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0903 23:29:45.572933  151207 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.90
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-957460"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.90"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.90"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0903 23:29:45.573025  151207 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0903 23:29:45.587509  151207 binaries.go:44] Found k8s binaries, skipping transfer
	I0903 23:29:45.587583  151207 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0903 23:29:45.600483  151207 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I0903 23:29:45.623892  151207 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0903 23:29:45.644207  151207 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2212 bytes)
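With the generated config staged as kubeadm.yaml.new, it can be validated without side effects before the cluster is touched; a sketch, reusing the kubeadm binary path from the unit file above:
	sudo /var/lib/minikube/binaries/v1.34.0/kubeadm init \
	    --config /var/tmp/minikube/kubeadm.yaml.new --dry-run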
	I0903 23:29:45.664580  151207 ssh_runner.go:195] Run: grep 192.168.39.90	control-plane.minikube.internal$ /etc/hosts
	I0903 23:29:45.668791  151207 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0903 23:29:45.845844  151207 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0903 23:29:45.870895  151207 certs.go:68] Setting up /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/pause-957460 for IP: 192.168.39.90
	I0903 23:29:45.870920  151207 certs.go:194] generating shared ca certs ...
	I0903 23:29:45.870936  151207 certs.go:226] acquiring lock for ca certs: {Name:mk06ceac58fd506484973632a7bd0b701183c2c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0903 23:29:45.871121  151207 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21341-109162/.minikube/ca.key
	I0903 23:29:45.871183  151207 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21341-109162/.minikube/proxy-client-ca.key
	I0903 23:29:45.871197  151207 certs.go:256] generating profile certs ...
	I0903 23:29:45.871284  151207 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/pause-957460/client.key
	I0903 23:29:45.871344  151207 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/pause-957460/apiserver.key.13718f5a
	I0903 23:29:45.871381  151207 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/pause-957460/proxy-client.key
	I0903 23:29:45.871484  151207 certs.go:484] found cert: /home/jenkins/minikube-integration/21341-109162/.minikube/certs/113288.pem (1338 bytes)
	W0903 23:29:45.871510  151207 certs.go:480] ignoring /home/jenkins/minikube-integration/21341-109162/.minikube/certs/113288_empty.pem, impossibly tiny 0 bytes
	I0903 23:29:45.871520  151207 certs.go:484] found cert: /home/jenkins/minikube-integration/21341-109162/.minikube/certs/ca-key.pem (1675 bytes)
	I0903 23:29:45.871541  151207 certs.go:484] found cert: /home/jenkins/minikube-integration/21341-109162/.minikube/certs/ca.pem (1078 bytes)
	I0903 23:29:45.871565  151207 certs.go:484] found cert: /home/jenkins/minikube-integration/21341-109162/.minikube/certs/cert.pem (1123 bytes)
	I0903 23:29:45.871602  151207 certs.go:484] found cert: /home/jenkins/minikube-integration/21341-109162/.minikube/certs/key.pem (1675 bytes)
	I0903 23:29:45.871661  151207 certs.go:484] found cert: /home/jenkins/minikube-integration/21341-109162/.minikube/files/etc/ssl/certs/1132882.pem (1708 bytes)
	I0903 23:29:45.872248  151207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0903 23:29:45.905809  151207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0903 23:29:46.013449  151207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0903 23:29:46.065527  151207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0903 23:29:46.126258  151207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/pause-957460/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0903 23:29:46.218401  151207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/pause-957460/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0903 23:29:46.317576  151207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/pause-957460/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0903 23:29:46.405885  151207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/pause-957460/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0903 23:29:46.468227  151207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/certs/113288.pem --> /usr/share/ca-certificates/113288.pem (1338 bytes)
	I0903 23:29:46.540356  151207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/files/etc/ssl/certs/1132882.pem --> /usr/share/ca-certificates/1132882.pem (1708 bytes)
	I0903 23:29:46.628688  151207 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0903 23:29:46.727214  151207 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0903 23:29:46.777080  151207 ssh_runner.go:195] Run: openssl version
	I0903 23:29:46.796340  151207 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/113288.pem && ln -fs /usr/share/ca-certificates/113288.pem /etc/ssl/certs/113288.pem"
	I0903 23:29:46.821701  151207 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/113288.pem
	I0903 23:29:46.833705  151207 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep  3 22:36 /usr/share/ca-certificates/113288.pem
	I0903 23:29:46.833779  151207 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/113288.pem
	I0903 23:29:46.845539  151207 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/113288.pem /etc/ssl/certs/51391683.0"
	I0903 23:29:46.868263  151207 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1132882.pem && ln -fs /usr/share/ca-certificates/1132882.pem /etc/ssl/certs/1132882.pem"
	I0903 23:29:46.890836  151207 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1132882.pem
	I0903 23:29:46.902613  151207 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep  3 22:36 /usr/share/ca-certificates/1132882.pem
	I0903 23:29:46.902691  151207 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1132882.pem
	I0903 23:29:46.915399  151207 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1132882.pem /etc/ssl/certs/3ec20f2e.0"
	I0903 23:29:46.936675  151207 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0903 23:29:47.040563  151207 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0903 23:29:47.056605  151207 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep  3 22:28 /usr/share/ca-certificates/minikubeCA.pem
	I0903 23:29:47.056691  151207 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0903 23:29:47.072040  151207 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
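The hash/symlink pairs above follow OpenSSL's c_rehash convention: TLS libraries look a CA up in /etc/ssl/certs by subject-name hash, so each PEM gets a <hash>.0 symlink. In sketch form:
	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # prints b5213941 here
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"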
	I0903 23:29:47.101448  151207 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0903 23:29:47.111388  151207 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0903 23:29:47.125496  151207 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0903 23:29:47.142359  151207 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0903 23:29:47.157193  151207 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0903 23:29:47.169376  151207 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0903 23:29:47.180485  151207 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
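The -checkend 86400 flag makes openssl exit non-zero if the certificate expires within the next 86400 seconds, so each cert above is being screened for a 24-hour validity margin, e.g.:
	openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400 \
	    && echo "valid for at least 24h" || echo "expires within 24h"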
	I0903 23:29:47.191874  151207 kubeadm.go:392] StartCluster: {Name:pause-957460 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21147/minikube-v1.36.0-1753487480-21147-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:pause-957460 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.90 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0903 23:29:47.192024  151207 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0903 23:29:47.192086  151207 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0903 23:29:47.262205  151207 cri.go:89] found id: "10e8e7d7bd3e7aac3752bd071d990274ecb8edc847cf6261efa2c66baf0d994b"
	I0903 23:29:47.262238  151207 cri.go:89] found id: "1b24e6d40da9c16e6c760fabd44817047863d2e5bac5ea60d85bb264100b7c73"
	I0903 23:29:47.262244  151207 cri.go:89] found id: "aa9d14c5b4e3ab0d9200e6db3849d49689c17a66f1418ddc40f9c7abca252cdf"
	I0903 23:29:47.262248  151207 cri.go:89] found id: "2337c9c17a585736200e843c09f9dc0d4ed47cc2d8a8aa8a77f42e9548c11e5e"
	I0903 23:29:47.262252  151207 cri.go:89] found id: "63a5274b0c5bfadea8983b60493d6610cc81c20b75987c71017aafd565687523"
	I0903 23:29:47.262256  151207 cri.go:89] found id: "622bd13cd8cac3d51aa7b0cafd1834f8de52c46ccd69532fe8bf3a6eb4a2e49d"
	I0903 23:29:47.262261  151207 cri.go:89] found id: "bf81fb211da095af4350a78f944ae302c860603d85647c92df059e7bab1bf58b"
	I0903 23:29:47.262266  151207 cri.go:89] found id: "235fbdc3e7ec406c66669bdd536b8030197b6f88152ff1ad09a72dcac8975024"
	I0903 23:29:47.262270  151207 cri.go:89] found id: "188275cc44fc3fba51ff3713eaf778fb8a952b28dbcea50ec838f84764dfebca"
	I0903 23:29:47.262278  151207 cri.go:89] found id: "4897d1fe35dcfa698a0f2777d418a4d07ee29d0345b9b1a7efaea54df6234af0"
	I0903 23:29:47.262282  151207 cri.go:89] found id: ""
	I0903 23:29:47.262343  151207 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-957460 -n pause-957460
helpers_test.go:269: (dbg) Run:  kubectl --context pause-957460 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (48.79s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (278.11s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-335468 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-335468 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (4m36.171880121s)

                                                
                                                
-- stdout --
	* [old-k8s-version-335468] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21341
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21341-109162/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21341-109162/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "old-k8s-version-335468" primary control-plane node in "old-k8s-version-335468" cluster
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0903 23:35:18.039041  161984 out.go:360] Setting OutFile to fd 1 ...
	I0903 23:35:18.039263  161984 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0903 23:35:18.039272  161984 out.go:374] Setting ErrFile to fd 2...
	I0903 23:35:18.039276  161984 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0903 23:35:18.039485  161984 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21341-109162/.minikube/bin
	I0903 23:35:18.040093  161984 out.go:368] Setting JSON to false
	I0903 23:35:18.041204  161984 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":8262,"bootTime":1756934256,"procs":298,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0903 23:35:18.041264  161984 start.go:140] virtualization: kvm guest
	I0903 23:35:18.043021  161984 out.go:179] * [old-k8s-version-335468] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	I0903 23:35:18.044495  161984 notify.go:220] Checking for updates...
	I0903 23:35:18.044515  161984 out.go:179]   - MINIKUBE_LOCATION=21341
	I0903 23:35:18.045888  161984 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0903 23:35:18.046984  161984 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21341-109162/kubeconfig
	I0903 23:35:18.048018  161984 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21341-109162/.minikube
	I0903 23:35:18.049067  161984 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0903 23:35:18.050188  161984 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0903 23:35:18.051917  161984 config.go:182] Loaded profile config "bridge-380966": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0903 23:35:18.052073  161984 config.go:182] Loaded profile config "enable-default-cni-380966": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0903 23:35:18.052204  161984 config.go:182] Loaded profile config "flannel-380966": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0903 23:35:18.052345  161984 driver.go:421] Setting default libvirt URI to qemu:///system
	I0903 23:35:18.096651  161984 out.go:179] * Using the kvm2 driver based on user configuration
	I0903 23:35:18.097947  161984 start.go:304] selected driver: kvm2
	I0903 23:35:18.097969  161984 start.go:918] validating driver "kvm2" against <nil>
	I0903 23:35:18.097985  161984 start.go:929] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0903 23:35:18.099114  161984 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0903 23:35:18.099215  161984 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21341-109162/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0903 23:35:18.122005  161984 install.go:137] /home/jenkins/minikube-integration/21341-109162/.minikube/bin/docker-machine-driver-kvm2 version is 1.36.0
	I0903 23:35:18.122079  161984 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0903 23:35:18.122349  161984 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0903 23:35:18.122388  161984 cni.go:84] Creating CNI manager for ""
	I0903 23:35:18.122437  161984 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0903 23:35:18.122447  161984 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0903 23:35:18.122528  161984 start.go:348] cluster config:
	{Name:old-k8s-version-335468 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-335468 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0903 23:35:18.122693  161984 iso.go:125] acquiring lock: {Name:mk1032fbced3c9e76ba5a04480289e9f07d0eb9c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0903 23:35:18.124470  161984 out.go:179] * Starting "old-k8s-version-335468" primary control-plane node in "old-k8s-version-335468" cluster
	I0903 23:35:18.125623  161984 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0903 23:35:18.125667  161984 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21341-109162/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0903 23:35:18.125675  161984 cache.go:58] Caching tarball of preloaded images
	I0903 23:35:18.125758  161984 preload.go:172] Found /home/jenkins/minikube-integration/21341-109162/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0903 23:35:18.125771  161984 cache.go:61] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0903 23:35:18.125863  161984 profile.go:143] Saving config to /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/old-k8s-version-335468/config.json ...
	I0903 23:35:18.125883  161984 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/old-k8s-version-335468/config.json: {Name:mk71decff7b8a487d7ca47735da709ca1e531539 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0903 23:35:18.126032  161984 start.go:360] acquireMachinesLock for old-k8s-version-335468: {Name:mkcbe368d68a51a2a3c0eadc653c4df7d9736b4b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0903 23:35:18.126062  161984 start.go:364] duration metric: took 16.268µs to acquireMachinesLock for "old-k8s-version-335468"
	I0903 23:35:18.126077  161984 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-335468 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21147/minikube-v1.36.0-1753487480-21147-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-335468 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0903 23:35:18.126133  161984 start.go:125] createHost starting for "" (driver="kvm2")
	I0903 23:35:18.127693  161984 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0903 23:35:18.127837  161984 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21341-109162/.minikube/bin/docker-machine-driver-kvm2
	I0903 23:35:18.127882  161984 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0903 23:35:18.145759  161984 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38561
	I0903 23:35:18.146254  161984 main.go:141] libmachine: () Calling .GetVersion
	I0903 23:35:18.146814  161984 main.go:141] libmachine: Using API Version  1
	I0903 23:35:18.146837  161984 main.go:141] libmachine: () Calling .SetConfigRaw
	I0903 23:35:18.147257  161984 main.go:141] libmachine: () Calling .GetMachineName
	I0903 23:35:18.147472  161984 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetMachineName
	I0903 23:35:18.147659  161984 main.go:141] libmachine: (old-k8s-version-335468) Calling .DriverName
	I0903 23:35:18.147839  161984 start.go:159] libmachine.API.Create for "old-k8s-version-335468" (driver="kvm2")
	I0903 23:35:18.147866  161984 client.go:168] LocalClient.Create starting
	I0903 23:35:18.147900  161984 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21341-109162/.minikube/certs/ca.pem
	I0903 23:35:18.147945  161984 main.go:141] libmachine: Decoding PEM data...
	I0903 23:35:18.147958  161984 main.go:141] libmachine: Parsing certificate...
	I0903 23:35:18.148006  161984 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21341-109162/.minikube/certs/cert.pem
	I0903 23:35:18.148034  161984 main.go:141] libmachine: Decoding PEM data...
	I0903 23:35:18.148046  161984 main.go:141] libmachine: Parsing certificate...
	I0903 23:35:18.148065  161984 main.go:141] libmachine: Running pre-create checks...
	I0903 23:35:18.148074  161984 main.go:141] libmachine: (old-k8s-version-335468) Calling .PreCreateCheck
	I0903 23:35:18.148438  161984 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetConfigRaw
	I0903 23:35:18.148896  161984 main.go:141] libmachine: Creating machine...
	I0903 23:35:18.148913  161984 main.go:141] libmachine: (old-k8s-version-335468) Calling .Create
	I0903 23:35:18.149067  161984 main.go:141] libmachine: (old-k8s-version-335468) creating KVM machine...
	I0903 23:35:18.149094  161984 main.go:141] libmachine: (old-k8s-version-335468) creating network...
	I0903 23:35:18.150557  161984 main.go:141] libmachine: (old-k8s-version-335468) DBG | found existing default KVM network
	I0903 23:35:18.152152  161984 main.go:141] libmachine: (old-k8s-version-335468) DBG | I0903 23:35:18.151987  162006 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:7e:87:4f} reservation:<nil>}
	I0903 23:35:18.153262  161984 main.go:141] libmachine: (old-k8s-version-335468) DBG | I0903 23:35:18.153146  162006 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:a3:da:20} reservation:<nil>}
	I0903 23:35:18.154416  161984 main.go:141] libmachine: (old-k8s-version-335468) DBG | I0903 23:35:18.154323  162006 network.go:206] using free private subnet 192.168.61.0/24: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002a2ed0}
	I0903 23:35:18.154442  161984 main.go:141] libmachine: (old-k8s-version-335468) DBG | created network xml: 
	I0903 23:35:18.154458  161984 main.go:141] libmachine: (old-k8s-version-335468) DBG | <network>
	I0903 23:35:18.154473  161984 main.go:141] libmachine: (old-k8s-version-335468) DBG |   <name>mk-old-k8s-version-335468</name>
	I0903 23:35:18.154513  161984 main.go:141] libmachine: (old-k8s-version-335468) DBG |   <dns enable='no'/>
	I0903 23:35:18.154526  161984 main.go:141] libmachine: (old-k8s-version-335468) DBG |   
	I0903 23:35:18.154550  161984 main.go:141] libmachine: (old-k8s-version-335468) DBG |   <ip address='192.168.61.1' netmask='255.255.255.0'>
	I0903 23:35:18.154566  161984 main.go:141] libmachine: (old-k8s-version-335468) DBG |     <dhcp>
	I0903 23:35:18.154575  161984 main.go:141] libmachine: (old-k8s-version-335468) DBG |       <range start='192.168.61.2' end='192.168.61.253'/>
	I0903 23:35:18.154583  161984 main.go:141] libmachine: (old-k8s-version-335468) DBG |     </dhcp>
	I0903 23:35:18.154591  161984 main.go:141] libmachine: (old-k8s-version-335468) DBG |   </ip>
	I0903 23:35:18.154597  161984 main.go:141] libmachine: (old-k8s-version-335468) DBG |   
	I0903 23:35:18.154605  161984 main.go:141] libmachine: (old-k8s-version-335468) DBG | </network>
	I0903 23:35:18.154613  161984 main.go:141] libmachine: (old-k8s-version-335468) DBG | 
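
The DBG lines above show the private network XML handed to libvirt. As a standalone sketch of the same step, one can write the XML to a file and shell out to virsh (this is not how the kvm2 driver does it internally; virsh on PATH and access to qemu:///system are assumed):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    // defineNetwork defines and starts a libvirt network from an XML
    // definition by shelling out to virsh.
    func defineNetwork(name, xml string) error {
        f, err := os.CreateTemp("", "net-*.xml")
        if err != nil {
            return err
        }
        defer os.Remove(f.Name())
        if _, err := f.WriteString(xml); err != nil {
            return err
        }
        f.Close()
        for _, args := range [][]string{
            {"-c", "qemu:///system", "net-define", f.Name()},
            {"-c", "qemu:///system", "net-start", name},
        } {
            if out, err := exec.Command("virsh", args...).CombinedOutput(); err != nil {
                return fmt.Errorf("virsh %v: %v: %s", args, err, out)
            }
        }
        return nil
    }

    func main() {
        // XML copied from the DBG dump above.
        const netXML = `<network>
      <name>mk-old-k8s-version-335468</name>
      <dns enable='no'/>
      <ip address='192.168.61.1' netmask='255.255.255.0'>
        <dhcp>
          <range start='192.168.61.2' end='192.168.61.253'/>
        </dhcp>
      </ip>
    </network>`
        if err := defineNetwork("mk-old-k8s-version-335468", netXML); err != nil {
            fmt.Println(err)
        }
    }
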
	I0903 23:35:18.159527  161984 main.go:141] libmachine: (old-k8s-version-335468) DBG | trying to create private KVM network mk-old-k8s-version-335468 192.168.61.0/24...
	I0903 23:35:18.246513  161984 main.go:141] libmachine: (old-k8s-version-335468) DBG | private KVM network mk-old-k8s-version-335468 192.168.61.0/24 created
	I0903 23:35:18.246638  161984 main.go:141] libmachine: (old-k8s-version-335468) DBG | I0903 23:35:18.246372  162006 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/21341-109162/.minikube
	I0903 23:35:18.246672  161984 main.go:141] libmachine: (old-k8s-version-335468) setting up store path in /home/jenkins/minikube-integration/21341-109162/.minikube/machines/old-k8s-version-335468 ...
	I0903 23:35:18.246708  161984 main.go:141] libmachine: (old-k8s-version-335468) building disk image from file:///home/jenkins/minikube-integration/21341-109162/.minikube/cache/iso/amd64/minikube-v1.36.0-1753487480-21147-amd64.iso
	I0903 23:35:18.246726  161984 main.go:141] libmachine: (old-k8s-version-335468) Downloading /home/jenkins/minikube-integration/21341-109162/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/21341-109162/.minikube/cache/iso/amd64/minikube-v1.36.0-1753487480-21147-amd64.iso...
	I0903 23:35:18.572372  161984 main.go:141] libmachine: (old-k8s-version-335468) DBG | I0903 23:35:18.572227  162006 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/21341-109162/.minikube/machines/old-k8s-version-335468/id_rsa...
	I0903 23:35:19.471333  161984 main.go:141] libmachine: (old-k8s-version-335468) DBG | I0903 23:35:19.471163  162006 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/21341-109162/.minikube/machines/old-k8s-version-335468/old-k8s-version-335468.rawdisk...
	I0903 23:35:19.471372  161984 main.go:141] libmachine: (old-k8s-version-335468) DBG | Writing magic tar header
	I0903 23:35:19.471390  161984 main.go:141] libmachine: (old-k8s-version-335468) DBG | Writing SSH key tar header
	I0903 23:35:19.471404  161984 main.go:141] libmachine: (old-k8s-version-335468) DBG | I0903 23:35:19.471287  162006 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/21341-109162/.minikube/machines/old-k8s-version-335468 ...
	I0903 23:35:19.471460  161984 main.go:141] libmachine: (old-k8s-version-335468) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21341-109162/.minikube/machines/old-k8s-version-335468
	I0903 23:35:19.471506  161984 main.go:141] libmachine: (old-k8s-version-335468) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21341-109162/.minikube/machines
	I0903 23:35:19.471531  161984 main.go:141] libmachine: (old-k8s-version-335468) setting executable bit set on /home/jenkins/minikube-integration/21341-109162/.minikube/machines/old-k8s-version-335468 (perms=drwx------)
	I0903 23:35:19.471554  161984 main.go:141] libmachine: (old-k8s-version-335468) setting executable bit set on /home/jenkins/minikube-integration/21341-109162/.minikube/machines (perms=drwxr-xr-x)
	I0903 23:35:19.471563  161984 main.go:141] libmachine: (old-k8s-version-335468) setting executable bit set on /home/jenkins/minikube-integration/21341-109162/.minikube (perms=drwxr-xr-x)
	I0903 23:35:19.471577  161984 main.go:141] libmachine: (old-k8s-version-335468) setting executable bit set on /home/jenkins/minikube-integration/21341-109162 (perms=drwxrwxr-x)
	I0903 23:35:19.471591  161984 main.go:141] libmachine: (old-k8s-version-335468) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0903 23:35:19.471604  161984 main.go:141] libmachine: (old-k8s-version-335468) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21341-109162/.minikube
	I0903 23:35:19.471616  161984 main.go:141] libmachine: (old-k8s-version-335468) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0903 23:35:19.471631  161984 main.go:141] libmachine: (old-k8s-version-335468) creating domain...
	I0903 23:35:19.471654  161984 main.go:141] libmachine: (old-k8s-version-335468) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21341-109162
	I0903 23:35:19.471665  161984 main.go:141] libmachine: (old-k8s-version-335468) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0903 23:35:19.471678  161984 main.go:141] libmachine: (old-k8s-version-335468) DBG | checking permissions on dir: /home/jenkins
	I0903 23:35:19.471689  161984 main.go:141] libmachine: (old-k8s-version-335468) DBG | checking permissions on dir: /home
	I0903 23:35:19.471703  161984 main.go:141] libmachine: (old-k8s-version-335468) DBG | skipping /home - not owner
	I0903 23:35:19.472753  161984 main.go:141] libmachine: (old-k8s-version-335468) define libvirt domain using xml: 
	I0903 23:35:19.472782  161984 main.go:141] libmachine: (old-k8s-version-335468) <domain type='kvm'>
	I0903 23:35:19.472825  161984 main.go:141] libmachine: (old-k8s-version-335468)   <name>old-k8s-version-335468</name>
	I0903 23:35:19.472853  161984 main.go:141] libmachine: (old-k8s-version-335468)   <memory unit='MiB'>3072</memory>
	I0903 23:35:19.472862  161984 main.go:141] libmachine: (old-k8s-version-335468)   <vcpu>2</vcpu>
	I0903 23:35:19.472874  161984 main.go:141] libmachine: (old-k8s-version-335468)   <features>
	I0903 23:35:19.472937  161984 main.go:141] libmachine: (old-k8s-version-335468)     <acpi/>
	I0903 23:35:19.472964  161984 main.go:141] libmachine: (old-k8s-version-335468)     <apic/>
	I0903 23:35:19.472974  161984 main.go:141] libmachine: (old-k8s-version-335468)     <pae/>
	I0903 23:35:19.472984  161984 main.go:141] libmachine: (old-k8s-version-335468)     
	I0903 23:35:19.472992  161984 main.go:141] libmachine: (old-k8s-version-335468)   </features>
	I0903 23:35:19.473004  161984 main.go:141] libmachine: (old-k8s-version-335468)   <cpu mode='host-passthrough'>
	I0903 23:35:19.473015  161984 main.go:141] libmachine: (old-k8s-version-335468)   
	I0903 23:35:19.473023  161984 main.go:141] libmachine: (old-k8s-version-335468)   </cpu>
	I0903 23:35:19.473028  161984 main.go:141] libmachine: (old-k8s-version-335468)   <os>
	I0903 23:35:19.473034  161984 main.go:141] libmachine: (old-k8s-version-335468)     <type>hvm</type>
	I0903 23:35:19.473040  161984 main.go:141] libmachine: (old-k8s-version-335468)     <boot dev='cdrom'/>
	I0903 23:35:19.473046  161984 main.go:141] libmachine: (old-k8s-version-335468)     <boot dev='hd'/>
	I0903 23:35:19.473052  161984 main.go:141] libmachine: (old-k8s-version-335468)     <bootmenu enable='no'/>
	I0903 23:35:19.473058  161984 main.go:141] libmachine: (old-k8s-version-335468)   </os>
	I0903 23:35:19.473062  161984 main.go:141] libmachine: (old-k8s-version-335468)   <devices>
	I0903 23:35:19.473067  161984 main.go:141] libmachine: (old-k8s-version-335468)     <disk type='file' device='cdrom'>
	I0903 23:35:19.473078  161984 main.go:141] libmachine: (old-k8s-version-335468)       <source file='/home/jenkins/minikube-integration/21341-109162/.minikube/machines/old-k8s-version-335468/boot2docker.iso'/>
	I0903 23:35:19.473087  161984 main.go:141] libmachine: (old-k8s-version-335468)       <target dev='hdc' bus='scsi'/>
	I0903 23:35:19.473092  161984 main.go:141] libmachine: (old-k8s-version-335468)       <readonly/>
	I0903 23:35:19.473097  161984 main.go:141] libmachine: (old-k8s-version-335468)     </disk>
	I0903 23:35:19.473103  161984 main.go:141] libmachine: (old-k8s-version-335468)     <disk type='file' device='disk'>
	I0903 23:35:19.473110  161984 main.go:141] libmachine: (old-k8s-version-335468)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0903 23:35:19.473123  161984 main.go:141] libmachine: (old-k8s-version-335468)       <source file='/home/jenkins/minikube-integration/21341-109162/.minikube/machines/old-k8s-version-335468/old-k8s-version-335468.rawdisk'/>
	I0903 23:35:19.473130  161984 main.go:141] libmachine: (old-k8s-version-335468)       <target dev='hda' bus='virtio'/>
	I0903 23:35:19.473134  161984 main.go:141] libmachine: (old-k8s-version-335468)     </disk>
	I0903 23:35:19.473141  161984 main.go:141] libmachine: (old-k8s-version-335468)     <interface type='network'>
	I0903 23:35:19.473147  161984 main.go:141] libmachine: (old-k8s-version-335468)       <source network='mk-old-k8s-version-335468'/>
	I0903 23:35:19.473153  161984 main.go:141] libmachine: (old-k8s-version-335468)       <model type='virtio'/>
	I0903 23:35:19.473159  161984 main.go:141] libmachine: (old-k8s-version-335468)     </interface>
	I0903 23:35:19.473165  161984 main.go:141] libmachine: (old-k8s-version-335468)     <interface type='network'>
	I0903 23:35:19.473170  161984 main.go:141] libmachine: (old-k8s-version-335468)       <source network='default'/>
	I0903 23:35:19.473177  161984 main.go:141] libmachine: (old-k8s-version-335468)       <model type='virtio'/>
	I0903 23:35:19.473181  161984 main.go:141] libmachine: (old-k8s-version-335468)     </interface>
	I0903 23:35:19.473186  161984 main.go:141] libmachine: (old-k8s-version-335468)     <serial type='pty'>
	I0903 23:35:19.473194  161984 main.go:141] libmachine: (old-k8s-version-335468)       <target port='0'/>
	I0903 23:35:19.473198  161984 main.go:141] libmachine: (old-k8s-version-335468)     </serial>
	I0903 23:35:19.473203  161984 main.go:141] libmachine: (old-k8s-version-335468)     <console type='pty'>
	I0903 23:35:19.473209  161984 main.go:141] libmachine: (old-k8s-version-335468)       <target type='serial' port='0'/>
	I0903 23:35:19.473214  161984 main.go:141] libmachine: (old-k8s-version-335468)     </console>
	I0903 23:35:19.473218  161984 main.go:141] libmachine: (old-k8s-version-335468)     <rng model='virtio'>
	I0903 23:35:19.473225  161984 main.go:141] libmachine: (old-k8s-version-335468)       <backend model='random'>/dev/random</backend>
	I0903 23:35:19.473237  161984 main.go:141] libmachine: (old-k8s-version-335468)     </rng>
	I0903 23:35:19.473245  161984 main.go:141] libmachine: (old-k8s-version-335468)     
	I0903 23:35:19.473249  161984 main.go:141] libmachine: (old-k8s-version-335468)     
	I0903 23:35:19.473253  161984 main.go:141] libmachine: (old-k8s-version-335468)   </devices>
	I0903 23:35:19.473257  161984 main.go:141] libmachine: (old-k8s-version-335468) </domain>
	I0903 23:35:19.473264  161984 main.go:141] libmachine: (old-k8s-version-335468) 
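
The domain XML above is rendered from the machine config. A rough text/template sketch of that idea; the template below is an illustrative subset, not the kvm2 driver's actual template:

    package main

    import (
        "os"
        "text/template"
    )

    // Illustrative subset of the logged domain definition.
    const domainTmpl = `<domain type='kvm'>
      <name>{{.Name}}</name>
      <memory unit='MiB'>{{.MemoryMiB}}</memory>
      <vcpu>{{.CPUs}}</vcpu>
      <devices>
        <disk type='file' device='disk'>
          <source file='{{.DiskPath}}'/>
          <target dev='hda' bus='virtio'/>
        </disk>
        <interface type='network'>
          <source network='{{.Network}}'/>
          <model type='virtio'/>
        </interface>
      </devices>
    </domain>
    `

    func main() {
        t := template.Must(template.New("domain").Parse(domainTmpl))
        _ = t.Execute(os.Stdout, struct {
            Name, DiskPath, Network string
            MemoryMiB, CPUs         int
        }{
            Name:      "old-k8s-version-335468",
            MemoryMiB: 3072,
            CPUs:      2,
            DiskPath:  "/path/to/old-k8s-version-335468.rawdisk",
            Network:   "mk-old-k8s-version-335468",
        })
    }
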
	I0903 23:35:19.477501  161984 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has defined MAC address 52:54:00:12:53:33 in network default
	I0903 23:35:19.478334  161984 main.go:141] libmachine: (old-k8s-version-335468) starting domain...
	I0903 23:35:19.478352  161984 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has defined MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
	I0903 23:35:19.478359  161984 main.go:141] libmachine: (old-k8s-version-335468) ensuring networks are active...
	I0903 23:35:19.479136  161984 main.go:141] libmachine: (old-k8s-version-335468) Ensuring network default is active
	I0903 23:35:19.479565  161984 main.go:141] libmachine: (old-k8s-version-335468) Ensuring network mk-old-k8s-version-335468 is active
	I0903 23:35:19.480118  161984 main.go:141] libmachine: (old-k8s-version-335468) getting domain XML...
	I0903 23:35:19.480936  161984 main.go:141] libmachine: (old-k8s-version-335468) creating domain...
	I0903 23:35:21.134737  161984 main.go:141] libmachine: (old-k8s-version-335468) waiting for IP...
	I0903 23:35:21.135797  161984 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has defined MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
	I0903 23:35:21.136403  161984 main.go:141] libmachine: (old-k8s-version-335468) DBG | unable to find current IP address of domain old-k8s-version-335468 in network mk-old-k8s-version-335468
	I0903 23:35:21.136511  161984 main.go:141] libmachine: (old-k8s-version-335468) DBG | I0903 23:35:21.136394  162006 retry.go:31] will retry after 215.50994ms: waiting for domain to come up
	I0903 23:35:21.354263  161984 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has defined MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
	I0903 23:35:21.355062  161984 main.go:141] libmachine: (old-k8s-version-335468) DBG | unable to find current IP address of domain old-k8s-version-335468 in network mk-old-k8s-version-335468
	I0903 23:35:21.355091  161984 main.go:141] libmachine: (old-k8s-version-335468) DBG | I0903 23:35:21.355024  162006 retry.go:31] will retry after 325.723507ms: waiting for domain to come up
	I0903 23:35:21.682934  161984 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has defined MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
	I0903 23:35:21.683877  161984 main.go:141] libmachine: (old-k8s-version-335468) DBG | unable to find current IP address of domain old-k8s-version-335468 in network mk-old-k8s-version-335468
	I0903 23:35:21.683909  161984 main.go:141] libmachine: (old-k8s-version-335468) DBG | I0903 23:35:21.683847  162006 retry.go:31] will retry after 459.355647ms: waiting for domain to come up
	I0903 23:35:22.144573  161984 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has defined MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
	I0903 23:35:22.145156  161984 main.go:141] libmachine: (old-k8s-version-335468) DBG | unable to find current IP address of domain old-k8s-version-335468 in network mk-old-k8s-version-335468
	I0903 23:35:22.145236  161984 main.go:141] libmachine: (old-k8s-version-335468) DBG | I0903 23:35:22.145120  162006 retry.go:31] will retry after 503.203879ms: waiting for domain to come up
	I0903 23:35:22.649977  161984 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has defined MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
	I0903 23:35:22.650621  161984 main.go:141] libmachine: (old-k8s-version-335468) DBG | unable to find current IP address of domain old-k8s-version-335468 in network mk-old-k8s-version-335468
	I0903 23:35:22.650657  161984 main.go:141] libmachine: (old-k8s-version-335468) DBG | I0903 23:35:22.650594  162006 retry.go:31] will retry after 464.720851ms: waiting for domain to come up
	I0903 23:35:23.117574  161984 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has defined MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
	I0903 23:35:23.118149  161984 main.go:141] libmachine: (old-k8s-version-335468) DBG | unable to find current IP address of domain old-k8s-version-335468 in network mk-old-k8s-version-335468
	I0903 23:35:23.118203  161984 main.go:141] libmachine: (old-k8s-version-335468) DBG | I0903 23:35:23.118108  162006 retry.go:31] will retry after 732.826722ms: waiting for domain to come up
	I0903 23:35:23.852683  161984 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has defined MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
	I0903 23:35:23.853184  161984 main.go:141] libmachine: (old-k8s-version-335468) DBG | unable to find current IP address of domain old-k8s-version-335468 in network mk-old-k8s-version-335468
	I0903 23:35:23.853214  161984 main.go:141] libmachine: (old-k8s-version-335468) DBG | I0903 23:35:23.853152  162006 retry.go:31] will retry after 781.935962ms: waiting for domain to come up
	I0903 23:35:24.637351  161984 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has defined MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
	I0903 23:35:24.637878  161984 main.go:141] libmachine: (old-k8s-version-335468) DBG | unable to find current IP address of domain old-k8s-version-335468 in network mk-old-k8s-version-335468
	I0903 23:35:24.637902  161984 main.go:141] libmachine: (old-k8s-version-335468) DBG | I0903 23:35:24.637848  162006 retry.go:31] will retry after 968.285439ms: waiting for domain to come up
	I0903 23:35:25.607379  161984 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has defined MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
	I0903 23:35:25.607853  161984 main.go:141] libmachine: (old-k8s-version-335468) DBG | unable to find current IP address of domain old-k8s-version-335468 in network mk-old-k8s-version-335468
	I0903 23:35:25.607876  161984 main.go:141] libmachine: (old-k8s-version-335468) DBG | I0903 23:35:25.607821  162006 retry.go:31] will retry after 1.261113415s: waiting for domain to come up
	I0903 23:35:26.870700  161984 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has defined MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
	I0903 23:35:26.871285  161984 main.go:141] libmachine: (old-k8s-version-335468) DBG | unable to find current IP address of domain old-k8s-version-335468 in network mk-old-k8s-version-335468
	I0903 23:35:26.871312  161984 main.go:141] libmachine: (old-k8s-version-335468) DBG | I0903 23:35:26.871194  162006 retry.go:31] will retry after 1.46273447s: waiting for domain to come up
	I0903 23:35:28.335735  161984 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has defined MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
	I0903 23:35:28.336266  161984 main.go:141] libmachine: (old-k8s-version-335468) DBG | unable to find current IP address of domain old-k8s-version-335468 in network mk-old-k8s-version-335468
	I0903 23:35:28.336290  161984 main.go:141] libmachine: (old-k8s-version-335468) DBG | I0903 23:35:28.336211  162006 retry.go:31] will retry after 2.185610178s: waiting for domain to come up
	I0903 23:35:30.529470  161984 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has defined MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
	I0903 23:35:30.529960  161984 main.go:141] libmachine: (old-k8s-version-335468) DBG | unable to find current IP address of domain old-k8s-version-335468 in network mk-old-k8s-version-335468
	I0903 23:35:30.529988  161984 main.go:141] libmachine: (old-k8s-version-335468) DBG | I0903 23:35:30.529875  162006 retry.go:31] will retry after 3.077161977s: waiting for domain to come up
	I0903 23:35:33.608647  161984 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has defined MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
	I0903 23:35:33.609280  161984 main.go:141] libmachine: (old-k8s-version-335468) DBG | unable to find current IP address of domain old-k8s-version-335468 in network mk-old-k8s-version-335468
	I0903 23:35:33.609418  161984 main.go:141] libmachine: (old-k8s-version-335468) DBG | I0903 23:35:33.609262  162006 retry.go:31] will retry after 4.288810735s: waiting for domain to come up
	I0903 23:35:37.902701  161984 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has defined MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
	I0903 23:35:37.903190  161984 main.go:141] libmachine: (old-k8s-version-335468) DBG | unable to find current IP address of domain old-k8s-version-335468 in network mk-old-k8s-version-335468
	I0903 23:35:37.903218  161984 main.go:141] libmachine: (old-k8s-version-335468) DBG | I0903 23:35:37.903142  162006 retry.go:31] will retry after 3.436518575s: waiting for domain to come up
	I0903 23:35:41.342510  161984 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has defined MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
	I0903 23:35:41.343095  161984 main.go:141] libmachine: (old-k8s-version-335468) found domain IP: 192.168.61.80
	I0903 23:35:41.343123  161984 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has current primary IP address 192.168.61.80 and MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
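
The repeated "will retry after ..." lines (retry.go:31) above come from polling the network's DHCP leases with a jittered, growing backoff until the domain reports an address. A minimal sketch of that pattern; lookupIP is a hypothetical stand-in for the lease query:

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    var errNoLease = errors.New("no DHCP lease yet")

    // lookupIP is a hypothetical stand-in for querying the network's DHCP
    // leases for the domain's MAC address.
    func lookupIP(mac string) (string, error) { return "", errNoLease }

    // waitForIP polls until the domain gets an address, sleeping with a
    // jittered, growing delay between attempts, like the retry.go lines above.
    func waitForIP(mac string, deadline time.Duration) (string, error) {
        delay := 200 * time.Millisecond
        start := time.Now()
        for time.Since(start) < deadline {
            if ip, err := lookupIP(mac); err == nil {
                return ip, nil
            }
            d := delay + time.Duration(rand.Int63n(int64(delay))) // add jitter
            fmt.Printf("will retry after %v: waiting for domain to come up\n", d)
            time.Sleep(d)
            delay += delay / 2 // grow roughly 1.5x per attempt
        }
        return "", fmt.Errorf("timed out waiting for IP of %s", mac)
    }

    func main() {
        if _, err := waitForIP("52:54:00:a2:6b:b9", 3*time.Second); err != nil {
            fmt.Println(err)
        }
    }
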
	I0903 23:35:41.343136  161984 main.go:141] libmachine: (old-k8s-version-335468) reserving static IP address...
	I0903 23:35:41.343494  161984 main.go:141] libmachine: (old-k8s-version-335468) DBG | unable to find host DHCP lease matching {name: "old-k8s-version-335468", mac: "52:54:00:a2:6b:b9", ip: "192.168.61.80"} in network mk-old-k8s-version-335468
	I0903 23:35:41.421658  161984 main.go:141] libmachine: (old-k8s-version-335468) reserved static IP address 192.168.61.80 for domain old-k8s-version-335468
	I0903 23:35:41.421683  161984 main.go:141] libmachine: (old-k8s-version-335468) waiting for SSH...
	I0903 23:35:41.421705  161984 main.go:141] libmachine: (old-k8s-version-335468) DBG | Getting to WaitForSSH function...
	I0903 23:35:41.424609  161984 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has defined MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
	I0903 23:35:41.424940  161984 main.go:141] libmachine: (old-k8s-version-335468) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:a2:6b:b9", ip: ""} in network mk-old-k8s-version-335468
	I0903 23:35:41.424966  161984 main.go:141] libmachine: (old-k8s-version-335468) DBG | unable to find defined IP address of network mk-old-k8s-version-335468 interface with MAC address 52:54:00:a2:6b:b9
	I0903 23:35:41.425109  161984 main.go:141] libmachine: (old-k8s-version-335468) DBG | Using SSH client type: external
	I0903 23:35:41.425133  161984 main.go:141] libmachine: (old-k8s-version-335468) DBG | Using SSH private key: /home/jenkins/minikube-integration/21341-109162/.minikube/machines/old-k8s-version-335468/id_rsa (-rw-------)
	I0903 23:35:41.425195  161984 main.go:141] libmachine: (old-k8s-version-335468) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/21341-109162/.minikube/machines/old-k8s-version-335468/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0903 23:35:41.425212  161984 main.go:141] libmachine: (old-k8s-version-335468) DBG | About to run SSH command:
	I0903 23:35:41.425230  161984 main.go:141] libmachine: (old-k8s-version-335468) DBG | exit 0
	I0903 23:35:41.429031  161984 main.go:141] libmachine: (old-k8s-version-335468) DBG | SSH cmd err, output: exit status 255: 
	I0903 23:35:41.429056  161984 main.go:141] libmachine: (old-k8s-version-335468) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0903 23:35:41.429066  161984 main.go:141] libmachine: (old-k8s-version-335468) DBG | command : exit 0
	I0903 23:35:41.429079  161984 main.go:141] libmachine: (old-k8s-version-335468) DBG | err     : exit status 255
	I0903 23:35:41.429093  161984 main.go:141] libmachine: (old-k8s-version-335468) DBG | output  : 
	I0903 23:35:44.429588  161984 main.go:141] libmachine: (old-k8s-version-335468) DBG | Getting to WaitForSSH function...
	I0903 23:35:44.432460  161984 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has defined MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
	I0903 23:35:44.432863  161984 main.go:141] libmachine: (old-k8s-version-335468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:6b:b9", ip: ""} in network mk-old-k8s-version-335468: {Iface:virbr3 ExpiryTime:2025-09-04 00:35:36 +0000 UTC Type:0 Mac:52:54:00:a2:6b:b9 Iaid: IPaddr:192.168.61.80 Prefix:24 Hostname:old-k8s-version-335468 Clientid:01:52:54:00:a2:6b:b9}
	I0903 23:35:44.432891  161984 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has defined IP address 192.168.61.80 and MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
	I0903 23:35:44.433079  161984 main.go:141] libmachine: (old-k8s-version-335468) DBG | Using SSH client type: external
	I0903 23:35:44.433106  161984 main.go:141] libmachine: (old-k8s-version-335468) DBG | Using SSH private key: /home/jenkins/minikube-integration/21341-109162/.minikube/machines/old-k8s-version-335468/id_rsa (-rw-------)
	I0903 23:35:44.433135  161984 main.go:141] libmachine: (old-k8s-version-335468) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.80 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/21341-109162/.minikube/machines/old-k8s-version-335468/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0903 23:35:44.433148  161984 main.go:141] libmachine: (old-k8s-version-335468) DBG | About to run SSH command:
	I0903 23:35:44.433164  161984 main.go:141] libmachine: (old-k8s-version-335468) DBG | exit 0
	I0903 23:35:44.565742  161984 main.go:141] libmachine: (old-k8s-version-335468) DBG | SSH cmd err, output: <nil>: 
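
WaitForSSH probes the guest by running `exit 0` through the system ssh binary with the non-interactive options logged above; the first attempt fails with status 255 because no DHCP lease (and thus no IP) exists yet, and the retry succeeds once the lease appears. An equivalent standalone probe, with the IP and key path taken from the log:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // sshAlive runs `exit 0` on the guest using the same non-interactive
    // options seen in the log; a zero exit status means sshd is up.
    func sshAlive(ip, keyPath string) bool {
        cmd := exec.Command("ssh",
            "-F", "/dev/null",
            "-o", "ConnectTimeout=10",
            "-o", "StrictHostKeyChecking=no",
            "-o", "UserKnownHostsFile=/dev/null",
            "-o", "PasswordAuthentication=no",
            "-o", "IdentitiesOnly=yes",
            "-i", keyPath,
            "-p", "22",
            "docker@"+ip,
            "exit 0")
        if err := cmd.Run(); err != nil {
            fmt.Println("ssh not ready:", err)
            return false
        }
        return true
    }

    func main() {
        ok := sshAlive("192.168.61.80",
            "/home/jenkins/minikube-integration/21341-109162/.minikube/machines/old-k8s-version-335468/id_rsa")
        fmt.Println("sshd ready:", ok)
    }
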
	I0903 23:35:44.566039  161984 main.go:141] libmachine: (old-k8s-version-335468) KVM machine creation complete
	I0903 23:35:44.566365  161984 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetConfigRaw
	I0903 23:35:44.566992  161984 main.go:141] libmachine: (old-k8s-version-335468) Calling .DriverName
	I0903 23:35:44.567168  161984 main.go:141] libmachine: (old-k8s-version-335468) Calling .DriverName
	I0903 23:35:44.567347  161984 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0903 23:35:44.567366  161984 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetState
	I0903 23:35:44.568762  161984 main.go:141] libmachine: Detecting operating system of created instance...
	I0903 23:35:44.568775  161984 main.go:141] libmachine: Waiting for SSH to be available...
	I0903 23:35:44.568780  161984 main.go:141] libmachine: Getting to WaitForSSH function...
	I0903 23:35:44.568786  161984 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetSSHHostname
	I0903 23:35:44.571280  161984 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has defined MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
	I0903 23:35:44.571676  161984 main.go:141] libmachine: (old-k8s-version-335468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:6b:b9", ip: ""} in network mk-old-k8s-version-335468: {Iface:virbr3 ExpiryTime:2025-09-04 00:35:36 +0000 UTC Type:0 Mac:52:54:00:a2:6b:b9 Iaid: IPaddr:192.168.61.80 Prefix:24 Hostname:old-k8s-version-335468 Clientid:01:52:54:00:a2:6b:b9}
	I0903 23:35:44.571710  161984 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has defined IP address 192.168.61.80 and MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
	I0903 23:35:44.571860  161984 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetSSHPort
	I0903 23:35:44.572051  161984 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetSSHKeyPath
	I0903 23:35:44.572222  161984 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetSSHKeyPath
	I0903 23:35:44.572379  161984 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetSSHUsername
	I0903 23:35:44.572555  161984 main.go:141] libmachine: Using SSH client type: native
	I0903 23:35:44.572927  161984 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83a420] 0x83d120 <nil>  [] 0s} 192.168.61.80 22 <nil> <nil>}
	I0903 23:35:44.572949  161984 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0903 23:35:44.688743  161984 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0903 23:35:44.688767  161984 main.go:141] libmachine: Detecting the provisioner...
	I0903 23:35:44.688780  161984 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetSSHHostname
	I0903 23:35:44.691495  161984 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has defined MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
	I0903 23:35:44.691822  161984 main.go:141] libmachine: (old-k8s-version-335468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:6b:b9", ip: ""} in network mk-old-k8s-version-335468: {Iface:virbr3 ExpiryTime:2025-09-04 00:35:36 +0000 UTC Type:0 Mac:52:54:00:a2:6b:b9 Iaid: IPaddr:192.168.61.80 Prefix:24 Hostname:old-k8s-version-335468 Clientid:01:52:54:00:a2:6b:b9}
	I0903 23:35:44.691846  161984 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has defined IP address 192.168.61.80 and MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
	I0903 23:35:44.692029  161984 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetSSHPort
	I0903 23:35:44.692202  161984 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetSSHKeyPath
	I0903 23:35:44.692372  161984 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetSSHKeyPath
	I0903 23:35:44.692490  161984 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetSSHUsername
	I0903 23:35:44.692633  161984 main.go:141] libmachine: Using SSH client type: native
	I0903 23:35:44.692877  161984 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83a420] 0x83d120 <nil>  [] 0s} 192.168.61.80 22 <nil> <nil>}
	I0903 23:35:44.692891  161984 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0903 23:35:44.811000  161984 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2025.02-dirty
	ID=buildroot
	VERSION_ID=2025.02
	PRETTY_NAME="Buildroot 2025.02"
	
	I0903 23:35:44.811087  161984 main.go:141] libmachine: found compatible host: buildroot
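
The provisioner is identified from the ID field of the /etc/os-release output fetched above. A small parser sketch:

    package main

    import (
        "bufio"
        "fmt"
        "strings"
    )

    // parseOSRelease turns `cat /etc/os-release` output into a key/value map,
    // stripping optional quotes, so ID=buildroot can be matched.
    func parseOSRelease(out string) map[string]string {
        kv := map[string]string{}
        sc := bufio.NewScanner(strings.NewReader(out))
        for sc.Scan() {
            line := strings.TrimSpace(sc.Text())
            if line == "" || strings.HasPrefix(line, "#") {
                continue
            }
            k, v, ok := strings.Cut(line, "=")
            if !ok {
                continue
            }
            kv[k] = strings.Trim(v, `"`)
        }
        return kv
    }

    func main() {
        out := "NAME=Buildroot\nVERSION=2025.02-dirty\nID=buildroot\n"
        fmt.Println(parseOSRelease(out)["ID"]) // buildroot
    }
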
	I0903 23:35:44.811102  161984 main.go:141] libmachine: Provisioning with buildroot...
	I0903 23:35:44.811117  161984 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetMachineName
	I0903 23:35:44.811397  161984 buildroot.go:166] provisioning hostname "old-k8s-version-335468"
	I0903 23:35:44.811435  161984 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetMachineName
	I0903 23:35:44.811684  161984 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetSSHHostname
	I0903 23:35:44.814995  161984 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has defined MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
	I0903 23:35:44.815430  161984 main.go:141] libmachine: (old-k8s-version-335468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:6b:b9", ip: ""} in network mk-old-k8s-version-335468: {Iface:virbr3 ExpiryTime:2025-09-04 00:35:36 +0000 UTC Type:0 Mac:52:54:00:a2:6b:b9 Iaid: IPaddr:192.168.61.80 Prefix:24 Hostname:old-k8s-version-335468 Clientid:01:52:54:00:a2:6b:b9}
	I0903 23:35:44.815464  161984 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has defined IP address 192.168.61.80 and MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
	I0903 23:35:44.815628  161984 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetSSHPort
	I0903 23:35:44.815808  161984 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetSSHKeyPath
	I0903 23:35:44.815979  161984 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetSSHKeyPath
	I0903 23:35:44.816113  161984 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetSSHUsername
	I0903 23:35:44.816275  161984 main.go:141] libmachine: Using SSH client type: native
	I0903 23:35:44.816513  161984 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83a420] 0x83d120 <nil>  [] 0s} 192.168.61.80 22 <nil> <nil>}
	I0903 23:35:44.816527  161984 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-335468 && echo "old-k8s-version-335468" | sudo tee /etc/hostname
	I0903 23:35:44.955548  161984 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-335468
	
	I0903 23:35:44.955576  161984 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetSSHHostname
	I0903 23:35:44.959264  161984 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has defined MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
	I0903 23:35:44.959711  161984 main.go:141] libmachine: (old-k8s-version-335468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:6b:b9", ip: ""} in network mk-old-k8s-version-335468: {Iface:virbr3 ExpiryTime:2025-09-04 00:35:36 +0000 UTC Type:0 Mac:52:54:00:a2:6b:b9 Iaid: IPaddr:192.168.61.80 Prefix:24 Hostname:old-k8s-version-335468 Clientid:01:52:54:00:a2:6b:b9}
	I0903 23:35:44.959742  161984 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has defined IP address 192.168.61.80 and MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
	I0903 23:35:44.959989  161984 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetSSHPort
	I0903 23:35:44.960178  161984 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetSSHKeyPath
	I0903 23:35:44.960405  161984 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetSSHKeyPath
	I0903 23:35:44.960578  161984 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetSSHUsername
	I0903 23:35:44.960830  161984 main.go:141] libmachine: Using SSH client type: native
	I0903 23:35:44.961179  161984 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83a420] 0x83d120 <nil>  [] 0s} 192.168.61.80 22 <nil> <nil>}
	I0903 23:35:44.961212  161984 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-335468' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-335468/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-335468' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0903 23:35:45.092515  161984 main.go:141] libmachine: SSH cmd err, output: <nil>: 
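
The shell snippet above is idempotent: it leaves /etc/hosts alone if the hostname is already mapped, rewrites an existing 127.0.1.1 entry if one is present, and appends one otherwise. The same logic as a Go sketch:

    package main

    import (
        "fmt"
        "os"
        "regexp"
    )

    // ensureHostname mirrors the shell snippet above: if no line already maps
    // the hostname, rewrite an existing 127.0.1.1 entry or append one.
    func ensureHostname(hostsPath, name string) error {
        b, err := os.ReadFile(hostsPath)
        if err != nil {
            return err
        }
        content := string(b)
        // Equivalent of: grep -xq '.*\s<name>' /etc/hosts
        if regexp.MustCompile(`(?m)^.*\s` + regexp.QuoteMeta(name) + `$`).MatchString(content) {
            return nil // already mapped
        }
        loop := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
        if loop.MatchString(content) {
            content = loop.ReplaceAllString(content, "127.0.1.1 "+name)
        } else {
            content += "127.0.1.1 " + name + "\n"
        }
        return os.WriteFile(hostsPath, []byte(content), 0644)
    }

    func main() {
        if err := ensureHostname("/etc/hosts", "old-k8s-version-335468"); err != nil {
            fmt.Println(err)
        }
    }
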
	I0903 23:35:45.092547  161984 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21341-109162/.minikube CaCertPath:/home/jenkins/minikube-integration/21341-109162/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21341-109162/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21341-109162/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21341-109162/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21341-109162/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21341-109162/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21341-109162/.minikube}
	I0903 23:35:45.092576  161984 buildroot.go:174] setting up certificates
	I0903 23:35:45.092589  161984 provision.go:84] configureAuth start
	I0903 23:35:45.092603  161984 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetMachineName
	I0903 23:35:45.092865  161984 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetIP
	I0903 23:35:45.095682  161984 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has defined MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
	I0903 23:35:45.096012  161984 main.go:141] libmachine: (old-k8s-version-335468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:6b:b9", ip: ""} in network mk-old-k8s-version-335468: {Iface:virbr3 ExpiryTime:2025-09-04 00:35:36 +0000 UTC Type:0 Mac:52:54:00:a2:6b:b9 Iaid: IPaddr:192.168.61.80 Prefix:24 Hostname:old-k8s-version-335468 Clientid:01:52:54:00:a2:6b:b9}
	I0903 23:35:45.096042  161984 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has defined IP address 192.168.61.80 and MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
	I0903 23:35:45.096145  161984 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetSSHHostname
	I0903 23:35:45.098562  161984 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has defined MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
	I0903 23:35:45.098838  161984 main.go:141] libmachine: (old-k8s-version-335468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:6b:b9", ip: ""} in network mk-old-k8s-version-335468: {Iface:virbr3 ExpiryTime:2025-09-04 00:35:36 +0000 UTC Type:0 Mac:52:54:00:a2:6b:b9 Iaid: IPaddr:192.168.61.80 Prefix:24 Hostname:old-k8s-version-335468 Clientid:01:52:54:00:a2:6b:b9}
	I0903 23:35:45.098857  161984 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has defined IP address 192.168.61.80 and MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
	I0903 23:35:45.099038  161984 provision.go:143] copyHostCerts
	I0903 23:35:45.099103  161984 exec_runner.go:144] found /home/jenkins/minikube-integration/21341-109162/.minikube/ca.pem, removing ...
	I0903 23:35:45.099124  161984 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21341-109162/.minikube/ca.pem
	I0903 23:35:45.099175  161984 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21341-109162/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21341-109162/.minikube/ca.pem (1078 bytes)
	I0903 23:35:45.099277  161984 exec_runner.go:144] found /home/jenkins/minikube-integration/21341-109162/.minikube/cert.pem, removing ...
	I0903 23:35:45.099285  161984 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21341-109162/.minikube/cert.pem
	I0903 23:35:45.099308  161984 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21341-109162/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21341-109162/.minikube/cert.pem (1123 bytes)
	I0903 23:35:45.099366  161984 exec_runner.go:144] found /home/jenkins/minikube-integration/21341-109162/.minikube/key.pem, removing ...
	I0903 23:35:45.099372  161984 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21341-109162/.minikube/key.pem
	I0903 23:35:45.099389  161984 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21341-109162/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21341-109162/.minikube/key.pem (1675 bytes)
	I0903 23:35:45.099436  161984 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21341-109162/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21341-109162/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21341-109162/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-335468 san=[127.0.0.1 192.168.61.80 localhost minikube old-k8s-version-335468]
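
The server certificate is issued locally, signed by the minikube CA, and carries the SANs listed in the san=[...] line above. A compact crypto/x509 sketch of issuing such a cert; the throwaway CA, key size, and output path here are illustrative:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "fmt"
        "math/big"
        "net"
        "os"
        "time"
    )

    // issueServerCert signs a server certificate for the SANs listed in the
    // log above, using the supplied CA.
    func issueServerCert(ca *x509.Certificate, caKey *rsa.PrivateKey) error {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            return err
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(time.Now().UnixNano()),
            Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-335468"}},
            NotBefore:    time.Now().Add(-time.Hour),
            NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            DNSNames:     []string{"localhost", "minikube", "old-k8s-version-335468"},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.61.80")},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
        if err != nil {
            return err
        }
        return os.WriteFile("server.pem",
            pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), 0644)
    }

    func main() {
        // Throwaway self-signed CA standing in for the minikube CA;
        // error handling elided for brevity.
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now().Add(-time.Hour),
            NotAfter:              time.Now().Add(26280 * time.Hour),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        der, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        ca, _ := x509.ParseCertificate(der)
        if err := issueServerCert(ca, caKey); err != nil {
            fmt.Println(err)
        }
    }
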
	I0903 23:35:45.841582  161984 provision.go:177] copyRemoteCerts
	I0903 23:35:45.841651  161984 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0903 23:35:45.841683  161984 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetSSHHostname
	I0903 23:35:45.843959  161984 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has defined MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
	I0903 23:35:45.844302  161984 main.go:141] libmachine: (old-k8s-version-335468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:6b:b9", ip: ""} in network mk-old-k8s-version-335468: {Iface:virbr3 ExpiryTime:2025-09-04 00:35:36 +0000 UTC Type:0 Mac:52:54:00:a2:6b:b9 Iaid: IPaddr:192.168.61.80 Prefix:24 Hostname:old-k8s-version-335468 Clientid:01:52:54:00:a2:6b:b9}
	I0903 23:35:45.844345  161984 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has defined IP address 192.168.61.80 and MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
	I0903 23:35:45.844514  161984 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetSSHPort
	I0903 23:35:45.844723  161984 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetSSHKeyPath
	I0903 23:35:45.844882  161984 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetSSHUsername
	I0903 23:35:45.845021  161984 sshutil.go:53] new ssh client: &{IP:192.168.61.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21341-109162/.minikube/machines/old-k8s-version-335468/id_rsa Username:docker}
	I0903 23:35:45.932385  161984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0903 23:35:45.958833  161984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0903 23:35:45.985131  161984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0903 23:35:46.010921  161984 provision.go:87] duration metric: took 918.314039ms to configureAuth
	I0903 23:35:46.010959  161984 buildroot.go:189] setting minikube options for container-runtime
	I0903 23:35:46.011121  161984 config.go:182] Loaded profile config "old-k8s-version-335468": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0903 23:35:46.011198  161984 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetSSHHostname
	I0903 23:35:46.014389  161984 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has defined MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
	I0903 23:35:46.014788  161984 main.go:141] libmachine: (old-k8s-version-335468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:6b:b9", ip: ""} in network mk-old-k8s-version-335468: {Iface:virbr3 ExpiryTime:2025-09-04 00:35:36 +0000 UTC Type:0 Mac:52:54:00:a2:6b:b9 Iaid: IPaddr:192.168.61.80 Prefix:24 Hostname:old-k8s-version-335468 Clientid:01:52:54:00:a2:6b:b9}
	I0903 23:35:46.014820  161984 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has defined IP address 192.168.61.80 and MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
	I0903 23:35:46.014975  161984 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetSSHPort
	I0903 23:35:46.015185  161984 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetSSHKeyPath
	I0903 23:35:46.015342  161984 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetSSHKeyPath
	I0903 23:35:46.015501  161984 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetSSHUsername
	I0903 23:35:46.015694  161984 main.go:141] libmachine: Using SSH client type: native
	I0903 23:35:46.016030  161984 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83a420] 0x83d120 <nil>  [] 0s} 192.168.61.80 22 <nil> <nil>}
	I0903 23:35:46.016060  161984 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0903 23:35:46.256083  161984 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0903 23:35:46.256112  161984 main.go:141] libmachine: Checking connection to Docker...
	I0903 23:35:46.256122  161984 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetURL
	I0903 23:35:46.257426  161984 main.go:141] libmachine: (old-k8s-version-335468) DBG | using libvirt version 6000000
	I0903 23:35:46.260034  161984 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has defined MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
	I0903 23:35:46.260375  161984 main.go:141] libmachine: (old-k8s-version-335468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:6b:b9", ip: ""} in network mk-old-k8s-version-335468: {Iface:virbr3 ExpiryTime:2025-09-04 00:35:36 +0000 UTC Type:0 Mac:52:54:00:a2:6b:b9 Iaid: IPaddr:192.168.61.80 Prefix:24 Hostname:old-k8s-version-335468 Clientid:01:52:54:00:a2:6b:b9}
	I0903 23:35:46.260409  161984 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has defined IP address 192.168.61.80 and MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
	I0903 23:35:46.260600  161984 main.go:141] libmachine: Docker is up and running!
	I0903 23:35:46.260610  161984 main.go:141] libmachine: Reticulating splines...
	I0903 23:35:46.260618  161984 client.go:171] duration metric: took 28.112742156s to LocalClient.Create
	I0903 23:35:46.260643  161984 start.go:167] duration metric: took 28.112806753s to libmachine.API.Create "old-k8s-version-335468"
	I0903 23:35:46.260656  161984 start.go:293] postStartSetup for "old-k8s-version-335468" (driver="kvm2")
	I0903 23:35:46.260668  161984 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0903 23:35:46.260687  161984 main.go:141] libmachine: (old-k8s-version-335468) Calling .DriverName
	I0903 23:35:46.260906  161984 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0903 23:35:46.260935  161984 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetSSHHostname
	I0903 23:35:46.263256  161984 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has defined MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
	I0903 23:35:46.263575  161984 main.go:141] libmachine: (old-k8s-version-335468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:6b:b9", ip: ""} in network mk-old-k8s-version-335468: {Iface:virbr3 ExpiryTime:2025-09-04 00:35:36 +0000 UTC Type:0 Mac:52:54:00:a2:6b:b9 Iaid: IPaddr:192.168.61.80 Prefix:24 Hostname:old-k8s-version-335468 Clientid:01:52:54:00:a2:6b:b9}
	I0903 23:35:46.263600  161984 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has defined IP address 192.168.61.80 and MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
	I0903 23:35:46.263774  161984 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetSSHPort
	I0903 23:35:46.263974  161984 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetSSHKeyPath
	I0903 23:35:46.264144  161984 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetSSHUsername
	I0903 23:35:46.264276  161984 sshutil.go:53] new ssh client: &{IP:192.168.61.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21341-109162/.minikube/machines/old-k8s-version-335468/id_rsa Username:docker}
	I0903 23:35:46.353670  161984 ssh_runner.go:195] Run: cat /etc/os-release
	I0903 23:35:46.358320  161984 info.go:137] Remote host: Buildroot 2025.02
	I0903 23:35:46.358354  161984 filesync.go:126] Scanning /home/jenkins/minikube-integration/21341-109162/.minikube/addons for local assets ...
	I0903 23:35:46.358472  161984 filesync.go:126] Scanning /home/jenkins/minikube-integration/21341-109162/.minikube/files for local assets ...
	I0903 23:35:46.358551  161984 filesync.go:149] local asset: /home/jenkins/minikube-integration/21341-109162/.minikube/files/etc/ssl/certs/1132882.pem -> 1132882.pem in /etc/ssl/certs
	I0903 23:35:46.358636  161984 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0903 23:35:46.370597  161984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/files/etc/ssl/certs/1132882.pem --> /etc/ssl/certs/1132882.pem (1708 bytes)
	I0903 23:35:46.398305  161984 start.go:296] duration metric: took 137.630635ms for postStartSetup
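The filesync scan above mirrors every file under the local .minikube/files tree onto the guest at the same relative path, which is how files/etc/ssl/certs/1132882.pem lands in /etc/ssl/certs. A minimal Go sketch of that mapping (the walker and print format are illustrative, not minikube's filesync API):

	package main

	import (
		"fmt"
		"io/fs"
		"path/filepath"
	)

	func main() {
		// Local assets root scanned in the log; adjust to your own tree.
		root := ".minikube/files"
		filepath.WalkDir(root, func(p string, d fs.DirEntry, err error) error {
			if err != nil || d.IsDir() {
				return err
			}
			// The guest destination is "/" plus the path relative to root.
			rel, _ := filepath.Rel(root, p)
			fmt.Printf("local asset: %s -> /%s\n", p, rel)
			return nil
		})
	}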
	I0903 23:35:46.398371  161984 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetConfigRaw
	I0903 23:35:46.398977  161984 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetIP
	I0903 23:35:46.401626  161984 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has defined MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
	I0903 23:35:46.401992  161984 main.go:141] libmachine: (old-k8s-version-335468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:6b:b9", ip: ""} in network mk-old-k8s-version-335468: {Iface:virbr3 ExpiryTime:2025-09-04 00:35:36 +0000 UTC Type:0 Mac:52:54:00:a2:6b:b9 Iaid: IPaddr:192.168.61.80 Prefix:24 Hostname:old-k8s-version-335468 Clientid:01:52:54:00:a2:6b:b9}
	I0903 23:35:46.402024  161984 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has defined IP address 192.168.61.80 and MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
	I0903 23:35:46.402270  161984 profile.go:143] Saving config to /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/old-k8s-version-335468/config.json ...
	I0903 23:35:46.402474  161984 start.go:128] duration metric: took 28.276330151s to createHost
	I0903 23:35:46.402504  161984 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetSSHHostname
	I0903 23:35:46.405045  161984 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has defined MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
	I0903 23:35:46.405408  161984 main.go:141] libmachine: (old-k8s-version-335468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:6b:b9", ip: ""} in network mk-old-k8s-version-335468: {Iface:virbr3 ExpiryTime:2025-09-04 00:35:36 +0000 UTC Type:0 Mac:52:54:00:a2:6b:b9 Iaid: IPaddr:192.168.61.80 Prefix:24 Hostname:old-k8s-version-335468 Clientid:01:52:54:00:a2:6b:b9}
	I0903 23:35:46.405436  161984 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has defined IP address 192.168.61.80 and MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
	I0903 23:35:46.405603  161984 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetSSHPort
	I0903 23:35:46.405771  161984 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetSSHKeyPath
	I0903 23:35:46.405902  161984 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetSSHKeyPath
	I0903 23:35:46.406065  161984 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetSSHUsername
	I0903 23:35:46.406218  161984 main.go:141] libmachine: Using SSH client type: native
	I0903 23:35:46.406438  161984 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83a420] 0x83d120 <nil>  [] 0s} 192.168.61.80 22 <nil> <nil>}
	I0903 23:35:46.406452  161984 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0903 23:35:46.522589  161984 main.go:141] libmachine: SSH cmd err, output: <nil>: 1756942546.494399912
	
	I0903 23:35:46.522614  161984 fix.go:216] guest clock: 1756942546.494399912
	I0903 23:35:46.522625  161984 fix.go:229] Guest: 2025-09-03 23:35:46.494399912 +0000 UTC Remote: 2025-09-03 23:35:46.402488166 +0000 UTC m=+28.406569168 (delta=91.911746ms)
	I0903 23:35:46.522650  161984 fix.go:200] guest clock delta is within tolerance: 91.911746ms
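The fix.go lines above read the guest clock over SSH (date +%s.%N) and accept the ~91.9ms drift against the host. A small Go sketch of that comparison; the 2s tolerance is an assumption for illustration, since the log only says the delta is "within tolerance":

	package main

	import (
		"fmt"
		"math"
		"time"
	)

	// withinTolerance reports whether the guest/host clock delta is small
	// enough to skip resynchronising the guest clock.
	func withinTolerance(guest, host time.Time, tolerance time.Duration) bool {
		return math.Abs(float64(guest.Sub(host))) <= float64(tolerance)
	}

	func main() {
		host := time.Now()
		guest := host.Add(91911746 * time.Nanosecond) // the delta reported in the log
		fmt.Println(withinTolerance(guest, host, 2*time.Second)) // true
	}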
	I0903 23:35:46.522656  161984 start.go:83] releasing machines lock for "old-k8s-version-335468", held for 28.39658803s
	I0903 23:35:46.522690  161984 main.go:141] libmachine: (old-k8s-version-335468) Calling .DriverName
	I0903 23:35:46.522945  161984 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetIP
	I0903 23:35:46.525924  161984 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has defined MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
	I0903 23:35:46.526323  161984 main.go:141] libmachine: (old-k8s-version-335468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:6b:b9", ip: ""} in network mk-old-k8s-version-335468: {Iface:virbr3 ExpiryTime:2025-09-04 00:35:36 +0000 UTC Type:0 Mac:52:54:00:a2:6b:b9 Iaid: IPaddr:192.168.61.80 Prefix:24 Hostname:old-k8s-version-335468 Clientid:01:52:54:00:a2:6b:b9}
	I0903 23:35:46.526354  161984 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has defined IP address 192.168.61.80 and MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
	I0903 23:35:46.526529  161984 main.go:141] libmachine: (old-k8s-version-335468) Calling .DriverName
	I0903 23:35:46.527061  161984 main.go:141] libmachine: (old-k8s-version-335468) Calling .DriverName
	I0903 23:35:46.527259  161984 main.go:141] libmachine: (old-k8s-version-335468) Calling .DriverName
	I0903 23:35:46.527354  161984 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0903 23:35:46.527411  161984 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetSSHHostname
	I0903 23:35:46.527525  161984 ssh_runner.go:195] Run: cat /version.json
	I0903 23:35:46.527556  161984 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetSSHHostname
	I0903 23:35:46.530228  161984 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has defined MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
	I0903 23:35:46.530321  161984 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has defined MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
	I0903 23:35:46.530671  161984 main.go:141] libmachine: (old-k8s-version-335468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:6b:b9", ip: ""} in network mk-old-k8s-version-335468: {Iface:virbr3 ExpiryTime:2025-09-04 00:35:36 +0000 UTC Type:0 Mac:52:54:00:a2:6b:b9 Iaid: IPaddr:192.168.61.80 Prefix:24 Hostname:old-k8s-version-335468 Clientid:01:52:54:00:a2:6b:b9}
	I0903 23:35:46.530698  161984 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has defined IP address 192.168.61.80 and MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
	I0903 23:35:46.530725  161984 main.go:141] libmachine: (old-k8s-version-335468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:6b:b9", ip: ""} in network mk-old-k8s-version-335468: {Iface:virbr3 ExpiryTime:2025-09-04 00:35:36 +0000 UTC Type:0 Mac:52:54:00:a2:6b:b9 Iaid: IPaddr:192.168.61.80 Prefix:24 Hostname:old-k8s-version-335468 Clientid:01:52:54:00:a2:6b:b9}
	I0903 23:35:46.530742  161984 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has defined IP address 192.168.61.80 and MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
	I0903 23:35:46.530880  161984 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetSSHPort
	I0903 23:35:46.530980  161984 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetSSHPort
	I0903 23:35:46.531045  161984 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetSSHKeyPath
	I0903 23:35:46.531118  161984 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetSSHKeyPath
	I0903 23:35:46.531195  161984 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetSSHUsername
	I0903 23:35:46.531255  161984 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetSSHUsername
	I0903 23:35:46.531342  161984 sshutil.go:53] new ssh client: &{IP:192.168.61.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21341-109162/.minikube/machines/old-k8s-version-335468/id_rsa Username:docker}
	I0903 23:35:46.531392  161984 sshutil.go:53] new ssh client: &{IP:192.168.61.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21341-109162/.minikube/machines/old-k8s-version-335468/id_rsa Username:docker}
	I0903 23:35:46.626959  161984 ssh_runner.go:195] Run: systemctl --version
	I0903 23:35:46.655744  161984 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0903 23:35:46.815441  161984 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0903 23:35:46.822187  161984 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0903 23:35:46.822262  161984 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0903 23:35:46.841163  161984 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
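The find/mv pipeline above parks any bridge or podman CNI config out of cri-o's way by renaming it with a .mk_disabled suffix, which is why 87-podman-bridge.conflist is reported as disabled. A Go rendering of the same sweep (a sketch of the shell command, not minikube's code):

	package main

	import (
		"fmt"
		"os"
		"path/filepath"
		"strings"
	)

	func main() {
		// Rename bridge/podman CNI configs so the runtime won't load them.
		matches, _ := filepath.Glob("/etc/cni/net.d/*")
		for _, m := range matches {
			base := filepath.Base(m)
			if strings.HasSuffix(base, ".mk_disabled") {
				continue // already parked
			}
			if strings.Contains(base, "bridge") || strings.Contains(base, "podman") {
				if err := os.Rename(m, m+".mk_disabled"); err == nil {
					fmt.Println("disabled", m)
				}
			}
		}
	}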
	I0903 23:35:46.841199  161984 start.go:495] detecting cgroup driver to use...
	I0903 23:35:46.841272  161984 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0903 23:35:46.861640  161984 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0903 23:35:46.879410  161984 docker.go:218] disabling cri-docker service (if available) ...
	I0903 23:35:46.879468  161984 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0903 23:35:46.895697  161984 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0903 23:35:46.912785  161984 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0903 23:35:47.071316  161984 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0903 23:35:47.232173  161984 docker.go:234] disabling docker service ...
	I0903 23:35:47.232236  161984 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0903 23:35:47.248820  161984 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0903 23:35:47.264046  161984 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0903 23:35:47.478302  161984 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0903 23:35:47.625709  161984 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0903 23:35:47.648440  161984 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0903 23:35:47.672766  161984 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0903 23:35:47.672835  161984 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0903 23:35:47.684719  161984 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0903 23:35:47.684799  161984 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0903 23:35:47.698464  161984 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0903 23:35:47.710027  161984 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
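The sed invocations above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image, switch the cgroup manager to cgroupfs, and re-add conmon_cgroup = "pod". A Go sketch of the same key rewrite, a simplified stand-in for the sed pattern rather than minikube's own code:

	package main

	import (
		"fmt"
		"regexp"
	)

	// setKey replaces any `key = ...` line with `key = "value"`, mirroring
	// the `sed -i 's|^.*key = .*$|...|'` calls in the log.
	func setKey(conf []byte, key, value string) []byte {
		re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
		return re.ReplaceAll(conf, []byte(fmt.Sprintf("%s = %q", key, value)))
	}

	func main() {
		conf := []byte("pause_image = \"k8s.gcr.io/pause:3.1\"\ncgroup_manager = \"systemd\"\n")
		conf = setKey(conf, "pause_image", "registry.k8s.io/pause:3.2")
		conf = setKey(conf, "cgroup_manager", "cgroupfs")
		fmt.Print(string(conf))
	}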
	I0903 23:35:47.721503  161984 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0903 23:35:47.735689  161984 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0903 23:35:47.748060  161984 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0903 23:35:47.748125  161984 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0903 23:35:47.769411  161984 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0903 23:35:47.783760  161984 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0903 23:35:47.932519  161984 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0903 23:35:48.067373  161984 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0903 23:35:48.067445  161984 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0903 23:35:48.073099  161984 start.go:563] Will wait 60s for crictl version
	I0903 23:35:48.073175  161984 ssh_runner.go:195] Run: which crictl
	I0903 23:35:48.077248  161984 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0903 23:35:48.116926  161984 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0903 23:35:48.117028  161984 ssh_runner.go:195] Run: crio --version
	I0903 23:35:48.151226  161984 ssh_runner.go:195] Run: crio --version
	I0903 23:35:48.184646  161984 out.go:179] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0903 23:35:48.185748  161984 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetIP
	I0903 23:35:48.188853  161984 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has defined MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
	I0903 23:35:48.189235  161984 main.go:141] libmachine: (old-k8s-version-335468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:6b:b9", ip: ""} in network mk-old-k8s-version-335468: {Iface:virbr3 ExpiryTime:2025-09-04 00:35:36 +0000 UTC Type:0 Mac:52:54:00:a2:6b:b9 Iaid: IPaddr:192.168.61.80 Prefix:24 Hostname:old-k8s-version-335468 Clientid:01:52:54:00:a2:6b:b9}
	I0903 23:35:48.189268  161984 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has defined IP address 192.168.61.80 and MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
	I0903 23:35:48.189523  161984 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0903 23:35:48.194168  161984 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0903 23:35:48.210749  161984 kubeadm.go:875] updating cluster {Name:old-k8s-version-335468 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21147/minikube-v1.36.0-1753487480-21147-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-335468 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.80 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0903 23:35:48.210888  161984 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0903 23:35:48.210941  161984 ssh_runner.go:195] Run: sudo crictl images --output json
	I0903 23:35:48.254517  161984 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0903 23:35:48.254645  161984 ssh_runner.go:195] Run: which lz4
	I0903 23:35:48.259258  161984 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0903 23:35:48.263941  161984 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0903 23:35:48.263978  161984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0903 23:35:49.997118  161984 crio.go:462] duration metric: took 1.737888543s to copy over tarball
	I0903 23:35:49.997217  161984 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0903 23:35:52.228611  161984 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.231356681s)
	I0903 23:35:52.228649  161984 crio.go:469] duration metric: took 2.231491053s to extract the tarball
	I0903 23:35:52.228660  161984 ssh_runner.go:146] rm: /preloaded.tar.lz4
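The preload flow above first stats /preloaded.tar.lz4 on the guest, copies the ~473MB cached tarball over when it is absent, unpacks it into /var, then deletes it. A sketch of the extract-and-cleanup step, assuming tar and lz4 are on PATH as they are in the minikube guest image:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	// extractPreload unpacks an lz4-compressed preload into dir, echoing the
	// `tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf`
	// invocation in the log.
	func extractPreload(tarball, dir string) error {
		if _, err := os.Stat(tarball); err != nil {
			return fmt.Errorf("preload missing, would copy it over first: %w", err)
		}
		cmd := exec.Command("tar", "--xattrs", "--xattrs-include", "security.capability",
			"-I", "lz4", "-C", dir, "-xf", tarball)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			return err
		}
		return os.Remove(tarball) // the log's rm of /preloaded.tar.lz4 after extraction
	}

	func main() {
		if err := extractPreload("/preloaded.tar.lz4", "/var"); err != nil {
			fmt.Println(err)
		}
	}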
	I0903 23:35:52.277278  161984 ssh_runner.go:195] Run: sudo crictl images --output json
	I0903 23:35:52.335808  161984 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0903 23:35:52.335833  161984 cache_images.go:89] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0903 23:35:52.335913  161984 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0903 23:35:52.335933  161984 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0903 23:35:52.335951  161984 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0903 23:35:52.335974  161984 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0903 23:35:52.335941  161984 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0903 23:35:52.335925  161984 image.go:138] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0903 23:35:52.336039  161984 image.go:138] retrieving image: registry.k8s.io/coredns:1.7.0
	I0903 23:35:52.335992  161984 image.go:138] retrieving image: registry.k8s.io/pause:3.2
	I0903 23:35:52.337889  161984 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0903 23:35:52.337995  161984 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0903 23:35:52.338167  161984 image.go:181] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0903 23:35:52.338308  161984 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0903 23:35:52.338448  161984 image.go:181] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0903 23:35:52.338456  161984 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0903 23:35:52.338567  161984 image.go:181] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0903 23:35:52.339017  161984 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0903 23:35:52.498700  161984 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0903 23:35:52.499272  161984 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0903 23:35:52.501972  161984 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0903 23:35:52.503187  161984 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0903 23:35:52.509248  161984 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0903 23:35:52.519169  161984 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0903 23:35:52.530844  161984 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0903 23:35:52.639948  161984 cache_images.go:117] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0903 23:35:52.640004  161984 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0903 23:35:52.640065  161984 ssh_runner.go:195] Run: which crictl
	I0903 23:35:52.654557  161984 cache_images.go:117] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0903 23:35:52.654604  161984 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0903 23:35:52.654658  161984 ssh_runner.go:195] Run: which crictl
	I0903 23:35:52.693932  161984 cache_images.go:117] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0903 23:35:52.693968  161984 cache_images.go:117] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0903 23:35:52.693980  161984 cache_images.go:117] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0903 23:35:52.693988  161984 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0903 23:35:52.694001  161984 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0903 23:35:52.694001  161984 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0903 23:35:52.694039  161984 ssh_runner.go:195] Run: which crictl
	I0903 23:35:52.694043  161984 ssh_runner.go:195] Run: which crictl
	I0903 23:35:52.694111  161984 cache_images.go:117] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0903 23:35:52.694043  161984 ssh_runner.go:195] Run: which crictl
	I0903 23:35:52.694138  161984 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0903 23:35:52.694173  161984 ssh_runner.go:195] Run: which crictl
	I0903 23:35:52.694184  161984 cache_images.go:117] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0903 23:35:52.694207  161984 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0903 23:35:52.694216  161984 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0903 23:35:52.694235  161984 ssh_runner.go:195] Run: which crictl
	I0903 23:35:52.694255  161984 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0903 23:35:52.707836  161984 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0903 23:35:52.769116  161984 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0903 23:35:52.769176  161984 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0903 23:35:52.769254  161984 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0903 23:35:52.769307  161984 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0903 23:35:52.769350  161984 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0903 23:35:52.769402  161984 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0903 23:35:52.769441  161984 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0903 23:35:52.908939  161984 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0903 23:35:52.908962  161984 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0903 23:35:52.928670  161984 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0903 23:35:52.928670  161984 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0903 23:35:52.928779  161984 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0903 23:35:52.928802  161984 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0903 23:35:52.928854  161984 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0903 23:35:53.042460  161984 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0903 23:35:53.042531  161984 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0903 23:35:53.073125  161984 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21341-109162/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0903 23:35:53.078185  161984 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0903 23:35:53.085357  161984 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0903 23:35:53.085401  161984 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21341-109162/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0903 23:35:53.085419  161984 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21341-109162/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0903 23:35:53.165579  161984 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21341-109162/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0903 23:35:53.165597  161984 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21341-109162/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0903 23:35:53.165680  161984 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21341-109162/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0903 23:35:53.165685  161984 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21341-109162/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0903 23:35:53.687766  161984 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0903 23:35:53.835237  161984 cache_images.go:93] duration metric: took 1.499383638s to LoadCachedImages
	W0903 23:35:53.835353  161984 out.go:285] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/21341-109162/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/21341-109162/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0: no such file or directory
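The cache_images pass above works per image: ask podman for the stored ID, mark the image "needs transfer" when that ID differs from the pinned hash, remove it via crictl, then try to reload it from the local cache, whose filenames replace the tag colon with an underscore (it is that cache file which is missing here). Two helpers sketching those conventions, illustrative rather than the cache_images API:

	package main

	import (
		"fmt"
		"path/filepath"
		"strings"
	)

	// cachePath maps an image ref to the on-disk cache layout seen in the log:
	// registry.k8s.io/kube-apiserver:v1.20.0 -> .../registry.k8s.io/kube-apiserver_v1.20.0.
	func cachePath(cacheDir, ref string) string {
		return filepath.Join(cacheDir, strings.ReplaceAll(ref, ":", "_"))
	}

	// needsTransfer mirrors the "needs transfer" check: reload the image when
	// the runtime reports no ID (image absent) or a different ID than pinned.
	func needsTransfer(runtimeID, wantID string) bool {
		return runtimeID != wantID
	}

	func main() {
		fmt.Println(cachePath(".minikube/cache/images/amd64", "registry.k8s.io/kube-apiserver:v1.20.0"))
		fmt.Println(needsTransfer("", "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99"))
	}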
	I0903 23:35:53.835373  161984 kubeadm.go:926] updating node { 192.168.61.80 8443 v1.20.0 crio true true} ...
	I0903 23:35:53.835523  161984 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-335468 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.80
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-335468 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0903 23:35:53.835626  161984 ssh_runner.go:195] Run: crio config
	I0903 23:35:53.883543  161984 cni.go:84] Creating CNI manager for ""
	I0903 23:35:53.883573  161984 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0903 23:35:53.883587  161984 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0903 23:35:53.883615  161984 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.80 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-335468 NodeName:old-k8s-version-335468 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.80"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.80 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0903 23:35:53.883732  161984 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.80
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-335468"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.80
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.80"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0903 23:35:53.883800  161984 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0903 23:35:53.896232  161984 binaries.go:44] Found k8s binaries, skipping transfer
	I0903 23:35:53.896335  161984 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0903 23:35:53.908526  161984 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0903 23:35:53.928837  161984 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0903 23:35:53.948156  161984 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0903 23:35:53.969691  161984 ssh_runner.go:195] Run: grep 192.168.61.80	control-plane.minikube.internal$ /etc/hosts
	I0903 23:35:53.973826  161984 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.80	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
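Both /etc/hosts edits in this run (for host.minikube.internal earlier and control-plane.minikube.internal here) follow the same idempotent pattern: filter out any existing line for the name, append a fresh "IP<TAB>name" entry, and copy the result back into place. A Go sketch of that rewrite:

	package main

	import (
		"fmt"
		"strings"
	)

	// ensureHostsEntry drops any existing line for name and appends a fresh
	// "IP<TAB>name" record, matching the grep -v / echo pipeline in the log.
	func ensureHostsEntry(hosts, ip, name string) string {
		var out []string
		for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
			if !strings.HasSuffix(line, "\t"+name) {
				out = append(out, line)
			}
		}
		out = append(out, ip+"\t"+name)
		return strings.Join(out, "\n") + "\n"
	}

	func main() {
		fmt.Print(ensureHostsEntry("127.0.0.1\tlocalhost\n", "192.168.61.80", "control-plane.minikube.internal"))
	}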
	I0903 23:35:53.988574  161984 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0903 23:35:54.128696  161984 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0903 23:35:54.152029  161984 certs.go:68] Setting up /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/old-k8s-version-335468 for IP: 192.168.61.80
	I0903 23:35:54.152051  161984 certs.go:194] generating shared ca certs ...
	I0903 23:35:54.152073  161984 certs.go:226] acquiring lock for ca certs: {Name:mk06ceac58fd506484973632a7bd0b701183c2c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0903 23:35:54.152264  161984 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21341-109162/.minikube/ca.key
	I0903 23:35:54.152325  161984 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21341-109162/.minikube/proxy-client-ca.key
	I0903 23:35:54.152340  161984 certs.go:256] generating profile certs ...
	I0903 23:35:54.152409  161984 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/old-k8s-version-335468/client.key
	I0903 23:35:54.152426  161984 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/old-k8s-version-335468/client.crt with IP's: []
	I0903 23:35:54.432126  161984 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/old-k8s-version-335468/client.crt ...
	I0903 23:35:54.432163  161984 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/old-k8s-version-335468/client.crt: {Name:mk97bcc2161054214f7351ac57a3df936aacf2f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0903 23:35:54.432364  161984 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/old-k8s-version-335468/client.key ...
	I0903 23:35:54.432383  161984 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/old-k8s-version-335468/client.key: {Name:mk119bfb198a426ab0ddd737eabb72691c01118a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0903 23:35:54.432500  161984 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/old-k8s-version-335468/apiserver.key.f2828629
	I0903 23:35:54.432521  161984 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/old-k8s-version-335468/apiserver.crt.f2828629 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.61.80]
	I0903 23:35:54.579315  161984 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/old-k8s-version-335468/apiserver.crt.f2828629 ...
	I0903 23:35:54.579356  161984 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/old-k8s-version-335468/apiserver.crt.f2828629: {Name:mk45e0f97c308fef43eb67c07e49fcb7f01ef5ec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0903 23:35:54.602800  161984 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/old-k8s-version-335468/apiserver.key.f2828629 ...
	I0903 23:35:54.602845  161984 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/old-k8s-version-335468/apiserver.key.f2828629: {Name:mkc4601cd80860574498e873f51cd053071b0ecc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0903 23:35:54.602988  161984 certs.go:381] copying /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/old-k8s-version-335468/apiserver.crt.f2828629 -> /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/old-k8s-version-335468/apiserver.crt
	I0903 23:35:54.603094  161984 certs.go:385] copying /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/old-k8s-version-335468/apiserver.key.f2828629 -> /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/old-k8s-version-335468/apiserver.key
	I0903 23:35:54.603177  161984 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/old-k8s-version-335468/proxy-client.key
	I0903 23:35:54.603200  161984 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/old-k8s-version-335468/proxy-client.crt with IP's: []
	I0903 23:35:54.939581  161984 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/old-k8s-version-335468/proxy-client.crt ...
	I0903 23:35:54.939619  161984 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/old-k8s-version-335468/proxy-client.crt: {Name:mk27140b43636e95bd5f07f645dee0380f0e7b97 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0903 23:35:54.939782  161984 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/old-k8s-version-335468/proxy-client.key ...
	I0903 23:35:54.939795  161984 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/old-k8s-version-335468/proxy-client.key: {Name:mkb9ce088cbb72486f21fc6a8c8ead81cc2cfe15 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
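The certs.go/crypto.go lines above generate three profile certificates signed by the shared minikube CA, with the apiserver cert carrying the IP SANs listed in the log. The sketch below builds a certificate with those same SANs, self-signed for brevity where minikube would sign with its CA, using the 26280h lifetime from the cluster config (errors are ignored to keep the sketch short):

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		key, _ := rsa.GenerateKey(rand.Reader, 2048)
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{CommonName: "minikube"},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config above
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// The IP SANs reported for the apiserver cert in the log.
			IPAddresses: []net.IP{
				net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
				net.ParseIP("10.0.0.1"), net.ParseIP("192.168.61.80"),
			},
		}
		der, _ := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}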
	I0903 23:35:54.939992  161984 certs.go:484] found cert: /home/jenkins/minikube-integration/21341-109162/.minikube/certs/113288.pem (1338 bytes)
	W0903 23:35:54.940032  161984 certs.go:480] ignoring /home/jenkins/minikube-integration/21341-109162/.minikube/certs/113288_empty.pem, impossibly tiny 0 bytes
	I0903 23:35:54.940042  161984 certs.go:484] found cert: /home/jenkins/minikube-integration/21341-109162/.minikube/certs/ca-key.pem (1675 bytes)
	I0903 23:35:54.940062  161984 certs.go:484] found cert: /home/jenkins/minikube-integration/21341-109162/.minikube/certs/ca.pem (1078 bytes)
	I0903 23:35:54.940089  161984 certs.go:484] found cert: /home/jenkins/minikube-integration/21341-109162/.minikube/certs/cert.pem (1123 bytes)
	I0903 23:35:54.940127  161984 certs.go:484] found cert: /home/jenkins/minikube-integration/21341-109162/.minikube/certs/key.pem (1675 bytes)
	I0903 23:35:54.940193  161984 certs.go:484] found cert: /home/jenkins/minikube-integration/21341-109162/.minikube/files/etc/ssl/certs/1132882.pem (1708 bytes)
	I0903 23:35:54.940793  161984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0903 23:35:54.982024  161984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0903 23:35:55.034254  161984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0903 23:35:55.083354  161984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0903 23:35:55.127481  161984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/old-k8s-version-335468/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0903 23:35:55.169437  161984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/old-k8s-version-335468/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0903 23:35:55.210349  161984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/old-k8s-version-335468/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0903 23:35:55.248016  161984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/old-k8s-version-335468/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0903 23:35:55.279738  161984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/files/etc/ssl/certs/1132882.pem --> /usr/share/ca-certificates/1132882.pem (1708 bytes)
	I0903 23:35:55.317088  161984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0903 23:35:55.345849  161984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/certs/113288.pem --> /usr/share/ca-certificates/113288.pem (1338 bytes)
	I0903 23:35:55.379441  161984 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0903 23:35:55.402044  161984 ssh_runner.go:195] Run: openssl version
	I0903 23:35:55.409029  161984 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1132882.pem && ln -fs /usr/share/ca-certificates/1132882.pem /etc/ssl/certs/1132882.pem"
	I0903 23:35:55.422144  161984 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1132882.pem
	I0903 23:35:55.427266  161984 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep  3 22:36 /usr/share/ca-certificates/1132882.pem
	I0903 23:35:55.427344  161984 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1132882.pem
	I0903 23:35:55.434648  161984 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1132882.pem /etc/ssl/certs/3ec20f2e.0"
	I0903 23:35:55.447663  161984 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0903 23:35:55.464802  161984 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0903 23:35:55.471696  161984 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep  3 22:28 /usr/share/ca-certificates/minikubeCA.pem
	I0903 23:35:55.471772  161984 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0903 23:35:55.481094  161984 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0903 23:35:55.498303  161984 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/113288.pem && ln -fs /usr/share/ca-certificates/113288.pem /etc/ssl/certs/113288.pem"
	I0903 23:35:55.515458  161984 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/113288.pem
	I0903 23:35:55.522357  161984 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep  3 22:36 /usr/share/ca-certificates/113288.pem
	I0903 23:35:55.522422  161984 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/113288.pem
	I0903 23:35:55.530082  161984 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/113288.pem /etc/ssl/certs/51391683.0"
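Each ln -fs above publishes a CA under its OpenSSL subject-hash name (for example b5213941.0 for minikubeCA.pem) so the system trust store can resolve it. A sketch that derives the hash by shelling out to openssl, as the log itself does; it needs root and an openssl binary on PATH:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	// linkCert symlinks pemPath into /etc/ssl/certs under its
	// `openssl x509 -hash` name, like the test's ln -fs commands.
	func linkCert(pemPath string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
		if err != nil {
			return err
		}
		link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
		os.Remove(link) // replace any stale link, matching `ln -fs`
		return os.Symlink(pemPath, link)
	}

	func main() {
		if err := linkCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
			fmt.Println(err)
		}
	}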
	I0903 23:35:55.544325  161984 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0903 23:35:55.551045  161984 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0903 23:35:55.551139  161984 kubeadm.go:392] StartCluster: {Name:old-k8s-version-335468 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21147/minikube-v1.36.0-1753487480-21147-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-335468 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.80 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0903 23:35:55.551248  161984 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0903 23:35:55.551341  161984 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0903 23:35:55.601863  161984 cri.go:89] found id: ""
	I0903 23:35:55.601936  161984 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0903 23:35:55.617084  161984 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0903 23:35:55.629224  161984 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0903 23:35:55.644534  161984 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0903 23:35:55.644561  161984 kubeadm.go:157] found existing configuration files:
	
	I0903 23:35:55.644657  161984 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0903 23:35:55.658365  161984 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0903 23:35:55.658436  161984 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0903 23:35:55.670122  161984 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0903 23:35:55.681005  161984 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0903 23:35:55.681076  161984 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0903 23:35:55.692222  161984 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0903 23:35:55.705784  161984 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0903 23:35:55.705866  161984 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0903 23:35:55.719328  161984 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0903 23:35:55.732750  161984 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0903 23:35:55.732822  161984 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
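The stale-config check above sweeps the four kubeconfigs under /etc/kubernetes, deleting any that do not already point at https://control-plane.minikube.internal:8443 so that kubeadm init can regenerate them (here all four were absent, as the grep failures show). A compact Go rendering of that sweep:

	package main

	import (
		"bytes"
		"fmt"
		"os"
	)

	// sweep removes each kubeconfig that is missing or does not mention the
	// expected control-plane endpoint, mirroring the grep/rm pairs in the log.
	func sweep(files []string, endpoint string) {
		for _, f := range files {
			data, err := os.ReadFile(f)
			if err != nil || !bytes.Contains(data, []byte(endpoint)) {
				os.Remove(f) // missing or stale: let kubeadm regenerate it
				fmt.Println("removed", f)
			}
		}
	}

	func main() {
		sweep([]string{
			"/etc/kubernetes/admin.conf", "/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf", "/etc/kubernetes/scheduler.conf",
		}, "https://control-plane.minikube.internal:8443")
	}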
	I0903 23:35:55.749641  161984 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0903 23:35:56.005804  161984 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0903 23:37:54.681456  161984 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0903 23:37:54.681650  161984 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0903 23:37:54.683536  161984 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0903 23:37:54.683603  161984 kubeadm.go:310] [preflight] Running pre-flight checks
	I0903 23:37:54.683731  161984 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0903 23:37:54.683818  161984 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0903 23:37:54.683896  161984 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0903 23:37:54.683949  161984 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0903 23:37:54.685544  161984 out.go:252]   - Generating certificates and keys ...
	I0903 23:37:54.685614  161984 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0903 23:37:54.685694  161984 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0903 23:37:54.685770  161984 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0903 23:37:54.685824  161984 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0903 23:37:54.685877  161984 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0903 23:37:54.685925  161984 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0903 23:37:54.685971  161984 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0903 23:37:54.686084  161984 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-335468] and IPs [192.168.61.80 127.0.0.1 ::1]
	I0903 23:37:54.686134  161984 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0903 23:37:54.686248  161984 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-335468] and IPs [192.168.61.80 127.0.0.1 ::1]
	I0903 23:37:54.686338  161984 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0903 23:37:54.686430  161984 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0903 23:37:54.686500  161984 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0903 23:37:54.686546  161984 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0903 23:37:54.686594  161984 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0903 23:37:54.686640  161984 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0903 23:37:54.686695  161984 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0903 23:37:54.686739  161984 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0903 23:37:54.686866  161984 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0903 23:37:54.686952  161984 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0903 23:37:54.687011  161984 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0903 23:37:54.687115  161984 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0903 23:37:54.688392  161984 out.go:252]   - Booting up control plane ...
	I0903 23:37:54.688490  161984 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0903 23:37:54.688592  161984 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0903 23:37:54.688696  161984 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0903 23:37:54.688796  161984 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0903 23:37:54.688973  161984 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0903 23:37:54.689039  161984 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0903 23:37:54.689135  161984 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0903 23:37:54.689412  161984 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0903 23:37:54.689491  161984 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0903 23:37:54.689756  161984 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0903 23:37:54.689830  161984 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0903 23:37:54.689991  161984 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0903 23:37:54.690052  161984 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0903 23:37:54.690207  161984 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0903 23:37:54.690281  161984 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0903 23:37:54.690494  161984 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0903 23:37:54.690512  161984 kubeadm.go:310] 
	I0903 23:37:54.690558  161984 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0903 23:37:54.690594  161984 kubeadm.go:310] 		timed out waiting for the condition
	I0903 23:37:54.690605  161984 kubeadm.go:310] 
	I0903 23:37:54.690634  161984 kubeadm.go:310] 	This error is likely caused by:
	I0903 23:37:54.690664  161984 kubeadm.go:310] 		- The kubelet is not running
	I0903 23:37:54.690768  161984 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0903 23:37:54.690780  161984 kubeadm.go:310] 
	I0903 23:37:54.690871  161984 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0903 23:37:54.690902  161984 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0903 23:37:54.690941  161984 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0903 23:37:54.690950  161984 kubeadm.go:310] 
	I0903 23:37:54.691037  161984 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0903 23:37:54.691103  161984 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0903 23:37:54.691109  161984 kubeadm.go:310] 
	I0903 23:37:54.691192  161984 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0903 23:37:54.691267  161984 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0903 23:37:54.691335  161984 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0903 23:37:54.691397  161984 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0903 23:37:54.691422  161984 kubeadm.go:310] 
	W0903 23:37:54.691524  161984 out.go:285] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-335468] and IPs [192.168.61.80 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-335468] and IPs [192.168.61.80 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
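The repeated [kubelet-check] entries above are kubeadm polling the kubelet's health endpoint on port 10248 until it responds; every attempt here fails with "connection refused" because the kubelet never came up. A minimal standalone probe of that endpoint, as a sketch in Go (the port and path are taken from the log; everything else is illustrative):

package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Same endpoint the kubeadm [kubelet-check] phase polls.
	client := &http.Client{Timeout: 5 * time.Second}
	resp, err := client.Get("http://localhost:10248/healthz")
	if err != nil {
		// With the kubelet down, this yields the "connection refused"
		// errors seen in the log above.
		fmt.Println("kubelet not healthy:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("kubelet healthz: %s (%q)\n", resp.Status, body)
}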
	
	I0903 23:37:54.691562  161984 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0903 23:37:56.968074  161984 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.276485344s)
	I0903 23:37:56.968162  161984 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0903 23:37:56.983462  161984 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0903 23:37:56.994565  161984 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0903 23:37:56.994588  161984 kubeadm.go:157] found existing configuration files:
	
	I0903 23:37:56.994630  161984 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0903 23:37:57.004478  161984 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0903 23:37:57.004542  161984 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0903 23:37:57.015094  161984 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0903 23:37:57.025478  161984 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0903 23:37:57.025544  161984 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0903 23:37:57.036975  161984 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0903 23:37:57.046743  161984 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0903 23:37:57.046797  161984 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0903 23:37:57.057221  161984 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0903 23:37:57.067109  161984 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0903 23:37:57.067161  161984 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
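The sweep above is minikube's stale-config check before the retry: each kubeconfig under /etc/kubernetes is grepped for the expected control-plane endpoint and removed if the endpoint is absent; here every grep exits with status 2 because `kubeadm reset` already deleted the files. A rough Go sketch of the same check (the endpoint and file list are taken from the log; the rest is illustrative):

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const endpoint = "https://control-plane.minikube.internal:8443"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil || !strings.Contains(string(data), endpoint) {
			// Mirrors the `sudo rm -f <conf>` runs logged above.
			os.Remove(f)
			fmt.Println("removed stale config:", f)
		}
	}
}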
	I0903 23:37:57.077629  161984 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0903 23:37:57.320966  161984 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0903 23:39:53.350090  161984 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0903 23:39:53.350225  161984 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0903 23:39:53.352239  161984 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0903 23:39:53.352325  161984 kubeadm.go:310] [preflight] Running pre-flight checks
	I0903 23:39:53.352429  161984 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0903 23:39:53.352559  161984 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0903 23:39:53.352700  161984 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0903 23:39:53.352785  161984 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0903 23:39:53.353884  161984 out.go:252]   - Generating certificates and keys ...
	I0903 23:39:53.354002  161984 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0903 23:39:53.354096  161984 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0903 23:39:53.354204  161984 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0903 23:39:53.354294  161984 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0903 23:39:53.354408  161984 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0903 23:39:53.354488  161984 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0903 23:39:53.354571  161984 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0903 23:39:53.354691  161984 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0903 23:39:53.354803  161984 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0903 23:39:53.354908  161984 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0903 23:39:53.354963  161984 kubeadm.go:310] [certs] Using the existing "sa" key
	I0903 23:39:53.355043  161984 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0903 23:39:53.355116  161984 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0903 23:39:53.355189  161984 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0903 23:39:53.355279  161984 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0903 23:39:53.355378  161984 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0903 23:39:53.355503  161984 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0903 23:39:53.355595  161984 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0903 23:39:53.355639  161984 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0903 23:39:53.355708  161984 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0903 23:39:53.356804  161984 out.go:252]   - Booting up control plane ...
	I0903 23:39:53.356945  161984 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0903 23:39:53.357090  161984 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0903 23:39:53.357200  161984 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0903 23:39:53.357322  161984 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0903 23:39:53.357557  161984 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0903 23:39:53.357628  161984 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0903 23:39:53.357717  161984 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0903 23:39:53.357955  161984 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0903 23:39:53.358039  161984 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0903 23:39:53.358267  161984 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0903 23:39:53.358357  161984 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0903 23:39:53.358607  161984 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0903 23:39:53.358690  161984 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0903 23:39:53.358948  161984 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0903 23:39:53.359032  161984 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0903 23:39:53.359346  161984 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0903 23:39:53.359365  161984 kubeadm.go:310] 
	I0903 23:39:53.359417  161984 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0903 23:39:53.359470  161984 kubeadm.go:310] 		timed out waiting for the condition
	I0903 23:39:53.359476  161984 kubeadm.go:310] 
	I0903 23:39:53.359539  161984 kubeadm.go:310] 	This error is likely caused by:
	I0903 23:39:53.359578  161984 kubeadm.go:310] 		- The kubelet is not running
	I0903 23:39:53.359718  161984 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0903 23:39:53.359727  161984 kubeadm.go:310] 
	I0903 23:39:53.359871  161984 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0903 23:39:53.359916  161984 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0903 23:39:53.359961  161984 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0903 23:39:53.359968  161984 kubeadm.go:310] 
	I0903 23:39:53.360175  161984 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0903 23:39:53.360307  161984 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0903 23:39:53.360316  161984 kubeadm.go:310] 
	I0903 23:39:53.360461  161984 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0903 23:39:53.360565  161984 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0903 23:39:53.360667  161984 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0903 23:39:53.360764  161984 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0903 23:39:53.360841  161984 kubeadm.go:394] duration metric: took 3m57.809707974s to StartCluster
	I0903 23:39:53.360890  161984 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:39:53.360954  161984 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:39:53.361022  161984 kubeadm.go:310] 
	I0903 23:39:53.423382  161984 cri.go:89] found id: ""
	I0903 23:39:53.423411  161984 logs.go:282] 0 containers: []
	W0903 23:39:53.423422  161984 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:39:53.423430  161984 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:39:53.423488  161984 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:39:53.479608  161984 cri.go:89] found id: ""
	I0903 23:39:53.479645  161984 logs.go:282] 0 containers: []
	W0903 23:39:53.479659  161984 logs.go:284] No container was found matching "etcd"
	I0903 23:39:53.479667  161984 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:39:53.479736  161984 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:39:53.528071  161984 cri.go:89] found id: ""
	I0903 23:39:53.528107  161984 logs.go:282] 0 containers: []
	W0903 23:39:53.528121  161984 logs.go:284] No container was found matching "coredns"
	I0903 23:39:53.528131  161984 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:39:53.528202  161984 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:39:53.573292  161984 cri.go:89] found id: ""
	I0903 23:39:53.573335  161984 logs.go:282] 0 containers: []
	W0903 23:39:53.573348  161984 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:39:53.573361  161984 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:39:53.573461  161984 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:39:53.620296  161984 cri.go:89] found id: ""
	I0903 23:39:53.620326  161984 logs.go:282] 0 containers: []
	W0903 23:39:53.620334  161984 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:39:53.620340  161984 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:39:53.620395  161984 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:39:53.671465  161984 cri.go:89] found id: ""
	I0903 23:39:53.671500  161984 logs.go:282] 0 containers: []
	W0903 23:39:53.671512  161984 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:39:53.671521  161984 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:39:53.671600  161984 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:39:53.726259  161984 cri.go:89] found id: ""
	I0903 23:39:53.726297  161984 logs.go:282] 0 containers: []
	W0903 23:39:53.726320  161984 logs.go:284] No container was found matching "kindnet"
	I0903 23:39:53.726335  161984 logs.go:123] Gathering logs for kubelet ...
	I0903 23:39:53.726350  161984 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0903 23:39:53.803144  161984 logs.go:123] Gathering logs for dmesg ...
	I0903 23:39:53.803236  161984 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0903 23:39:53.825585  161984 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:39:53.825628  161984 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:39:53.938313  161984 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0903 23:39:53.938350  161984 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:39:53.938368  161984 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0903 23:39:54.079732  161984 logs.go:123] Gathering logs for container status ...
	I0903 23:39:54.079785  161984 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
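At this point the second kubeadm attempt has also timed out, and minikube gathers diagnostics before giving up: the kubelet and CRI-O journals, dmesg, `kubectl describe nodes` (which fails, since no API server is listening on 8443), and a container listing. A sketch of running the same gathering commands directly (commands copied from the log; it assumes root on the node, so treat it as illustrative only):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Same commands the post-mortem pass above runs over SSH.
	cmds := []struct{ name, cmd string }{
		{"kubelet", "sudo journalctl -u kubelet -n 400"},
		{"dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
		{"CRI-O", "sudo journalctl -u crio -n 400"},
		{"containers", "sudo crictl ps -a || sudo docker ps -a"},
	}
	for _, c := range cmds {
		out, err := exec.Command("/bin/bash", "-c", c.cmd).CombinedOutput()
		fmt.Printf("==> %s <==\n%s(err: %v)\n", c.name, out, err)
	}
}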
	W0903 23:39:54.144894  161984 out.go:434] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0903 23:39:54.144973  161984 out.go:285] * 
	W0903 23:39:54.145064  161984 out.go:285] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0903 23:39:54.145083  161984 out.go:285] * 
	W0903 23:39:54.147493  161984 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0903 23:39:54.150778  161984 out.go:203] 
	W0903 23:39:54.151952  161984 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equivalent to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equivalent to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equivalent to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equivalent to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equivalent to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtime's CLI.
	
		Here is one example of how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled; please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0903 23:39:54.152049  161984 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0903 23:39:54.152109  161984 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0903 23:39:54.153719  161984 out.go:203] 

                                                
                                                
** /stderr **
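The suggestion in the log above points at a kubelet cgroup-driver mismatch; as a hedged sketch, reusing the exact arguments from the failed start below plus the suggested flag, a retry would look like:

	out/minikube-linux-amd64 start -p old-k8s-version-335468 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.20.0 --extra-config=kubelet.cgroup-driver=systemd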
start_stop_delete_test.go:186: failed starting minikube (first start). args "out/minikube-linux-amd64 start -p old-k8s-version-335468 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/FirstStart]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-335468 -n old-k8s-version-335468
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-335468 -n old-k8s-version-335468: exit status 6 (333.895807ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0903 23:39:54.569192  168858 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-335468" does not appear in /home/jenkins/minikube-integration/21341-109162/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:247: status error: exit status 6 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/FirstStart FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/FirstStart]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-335468 logs -n 25
E0903 23:39:54.713168  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/custom-flannel-380966/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/FirstStart logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                  ARGS                                                                                  │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p embed-certs-088493 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.0                   │ embed-certs-088493           │ jenkins │ v1.36.0 │ 03 Sep 25 23:36 UTC │ 03 Sep 25 23:37 UTC │
	│ ssh     │ -p enable-default-cni-380966 sudo systemctl status containerd --all --full --no-pager                                                                                  │ enable-default-cni-380966    │ jenkins │ v1.36.0 │ 03 Sep 25 23:36 UTC │                     │
	│ ssh     │ -p enable-default-cni-380966 sudo systemctl cat containerd --no-pager                                                                                                  │ enable-default-cni-380966    │ jenkins │ v1.36.0 │ 03 Sep 25 23:36 UTC │ 03 Sep 25 23:36 UTC │
	│ ssh     │ -p enable-default-cni-380966 sudo cat /lib/systemd/system/containerd.service                                                                                           │ enable-default-cni-380966    │ jenkins │ v1.36.0 │ 03 Sep 25 23:36 UTC │ 03 Sep 25 23:36 UTC │
	│ ssh     │ -p enable-default-cni-380966 sudo cat /etc/containerd/config.toml                                                                                                      │ enable-default-cni-380966    │ jenkins │ v1.36.0 │ 03 Sep 25 23:36 UTC │ 03 Sep 25 23:36 UTC │
	│ ssh     │ -p enable-default-cni-380966 sudo containerd config dump                                                                                                               │ enable-default-cni-380966    │ jenkins │ v1.36.0 │ 03 Sep 25 23:36 UTC │ 03 Sep 25 23:36 UTC │
	│ ssh     │ -p enable-default-cni-380966 sudo systemctl status crio --all --full --no-pager                                                                                        │ enable-default-cni-380966    │ jenkins │ v1.36.0 │ 03 Sep 25 23:36 UTC │ 03 Sep 25 23:36 UTC │
	│ ssh     │ -p enable-default-cni-380966 sudo systemctl cat crio --no-pager                                                                                                        │ enable-default-cni-380966    │ jenkins │ v1.36.0 │ 03 Sep 25 23:36 UTC │ 03 Sep 25 23:36 UTC │
	│ ssh     │ -p enable-default-cni-380966 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                              │ enable-default-cni-380966    │ jenkins │ v1.36.0 │ 03 Sep 25 23:36 UTC │ 03 Sep 25 23:36 UTC │
	│ ssh     │ -p enable-default-cni-380966 sudo crio config                                                                                                                          │ enable-default-cni-380966    │ jenkins │ v1.36.0 │ 03 Sep 25 23:36 UTC │ 03 Sep 25 23:36 UTC │
	│ delete  │ -p enable-default-cni-380966                                                                                                                                           │ enable-default-cni-380966    │ jenkins │ v1.36.0 │ 03 Sep 25 23:36 UTC │ 03 Sep 25 23:36 UTC │
	│ delete  │ -p disable-driver-mounts-005091                                                                                                                                        │ disable-driver-mounts-005091 │ jenkins │ v1.36.0 │ 03 Sep 25 23:36 UTC │ 03 Sep 25 23:36 UTC │
	│ start   │ -p default-k8s-diff-port-799704 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.0 │ default-k8s-diff-port-799704 │ jenkins │ v1.36.0 │ 03 Sep 25 23:36 UTC │ 03 Sep 25 23:37 UTC │
	│ addons  │ enable metrics-server -p no-preload-434043 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                │ no-preload-434043            │ jenkins │ v1.36.0 │ 03 Sep 25 23:37 UTC │ 03 Sep 25 23:37 UTC │
	│ stop    │ -p no-preload-434043 --alsologtostderr -v=3                                                                                                                            │ no-preload-434043            │ jenkins │ v1.36.0 │ 03 Sep 25 23:37 UTC │ 03 Sep 25 23:38 UTC │
	│ addons  │ enable metrics-server -p embed-certs-088493 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                               │ embed-certs-088493           │ jenkins │ v1.36.0 │ 03 Sep 25 23:37 UTC │ 03 Sep 25 23:37 UTC │
	│ stop    │ -p embed-certs-088493 --alsologtostderr -v=3                                                                                                                           │ embed-certs-088493           │ jenkins │ v1.36.0 │ 03 Sep 25 23:37 UTC │ 03 Sep 25 23:39 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-799704 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                     │ default-k8s-diff-port-799704 │ jenkins │ v1.36.0 │ 03 Sep 25 23:37 UTC │ 03 Sep 25 23:37 UTC │
	│ stop    │ -p default-k8s-diff-port-799704 --alsologtostderr -v=3                                                                                                                 │ default-k8s-diff-port-799704 │ jenkins │ v1.36.0 │ 03 Sep 25 23:37 UTC │ 03 Sep 25 23:39 UTC │
	│ addons  │ enable dashboard -p no-preload-434043 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                           │ no-preload-434043            │ jenkins │ v1.36.0 │ 03 Sep 25 23:38 UTC │ 03 Sep 25 23:38 UTC │
	│ start   │ -p no-preload-434043 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.0                  │ no-preload-434043            │ jenkins │ v1.36.0 │ 03 Sep 25 23:38 UTC │ 03 Sep 25 23:39 UTC │
	│ addons  │ enable dashboard -p embed-certs-088493 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                          │ embed-certs-088493           │ jenkins │ v1.36.0 │ 03 Sep 25 23:39 UTC │ 03 Sep 25 23:39 UTC │
	│ start   │ -p embed-certs-088493 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.0                   │ embed-certs-088493           │ jenkins │ v1.36.0 │ 03 Sep 25 23:39 UTC │                     │
	│ addons  │ enable dashboard -p default-k8s-diff-port-799704 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                │ default-k8s-diff-port-799704 │ jenkins │ v1.36.0 │ 03 Sep 25 23:39 UTC │ 03 Sep 25 23:39 UTC │
	│ start   │ -p default-k8s-diff-port-799704 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.0 │ default-k8s-diff-port-799704 │ jenkins │ v1.36.0 │ 03 Sep 25 23:39 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/03 23:39:31
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
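	(Reading the first entry below against that format: "I" = info severity, "0903" = Sep 3, "23:39:31.271818" = hh:mm:ss.uuuuuu, "168525" = thread id, "out.go:360" = source file and line, and the remainder is the message.)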
	I0903 23:39:31.271818  168525 out.go:360] Setting OutFile to fd 1 ...
	I0903 23:39:31.272050  168525 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0903 23:39:31.272058  168525 out.go:374] Setting ErrFile to fd 2...
	I0903 23:39:31.272062  168525 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0903 23:39:31.272279  168525 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21341-109162/.minikube/bin
	I0903 23:39:31.272813  168525 out.go:368] Setting JSON to false
	I0903 23:39:31.273874  168525 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":8515,"bootTime":1756934256,"procs":219,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0903 23:39:31.273940  168525 start.go:140] virtualization: kvm guest
	I0903 23:39:31.275828  168525 out.go:179] * [default-k8s-diff-port-799704] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	I0903 23:39:31.277406  168525 notify.go:220] Checking for updates...
	I0903 23:39:31.278829  168525 out.go:179]   - MINIKUBE_LOCATION=21341
	I0903 23:39:31.280177  168525 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0903 23:39:31.281537  168525 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21341-109162/kubeconfig
	I0903 23:39:31.282646  168525 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21341-109162/.minikube
	I0903 23:39:31.283774  168525 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0903 23:39:31.284974  168525 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0903 23:39:31.286724  168525 config.go:182] Loaded profile config "default-k8s-diff-port-799704": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0903 23:39:31.287351  168525 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21341-109162/.minikube/bin/docker-machine-driver-kvm2
	I0903 23:39:31.287440  168525 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0903 23:39:31.308970  168525 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45085
	I0903 23:39:31.309860  168525 main.go:141] libmachine: () Calling .GetVersion
	I0903 23:39:31.310730  168525 main.go:141] libmachine: Using API Version  1
	I0903 23:39:31.310751  168525 main.go:141] libmachine: () Calling .SetConfigRaw
	I0903 23:39:31.311414  168525 main.go:141] libmachine: () Calling .GetMachineName
	I0903 23:39:31.311676  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) Calling .DriverName
	I0903 23:39:31.311969  168525 driver.go:421] Setting default libvirt URI to qemu:///system
	I0903 23:39:31.312450  168525 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21341-109162/.minikube/bin/docker-machine-driver-kvm2
	I0903 23:39:31.312503  168525 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0903 23:39:31.333553  168525 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40457
	I0903 23:39:31.334226  168525 main.go:141] libmachine: () Calling .GetVersion
	I0903 23:39:31.334781  168525 main.go:141] libmachine: Using API Version  1
	I0903 23:39:31.334799  168525 main.go:141] libmachine: () Calling .SetConfigRaw
	I0903 23:39:31.335144  168525 main.go:141] libmachine: () Calling .GetMachineName
	I0903 23:39:31.335265  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) Calling .DriverName
	I0903 23:39:31.388196  168525 out.go:179] * Using the kvm2 driver based on existing profile
	I0903 23:39:31.389355  168525 start.go:304] selected driver: kvm2
	I0903 23:39:31.389381  168525 start.go:918] validating driver "kvm2" against &{Name:default-k8s-diff-port-799704 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21147/minikube-v1.36.0-1753487480-21147-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:default-k8s-diff-port-799704 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.63 Port:8444 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0903 23:39:31.389764  168525 start.go:929] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0903 23:39:31.391092  168525 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0903 23:39:31.391304  168525 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21341-109162/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0903 23:39:31.418651  168525 install.go:137] /home/jenkins/minikube-integration/21341-109162/.minikube/bin/docker-machine-driver-kvm2 version is 1.36.0
	I0903 23:39:31.419224  168525 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0903 23:39:31.419280  168525 cni.go:84] Creating CNI manager for ""
	I0903 23:39:31.419338  168525 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0903 23:39:31.419383  168525 start.go:348] cluster config:
	{Name:default-k8s-diff-port-799704 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21147/minikube-v1.36.0-1753487480-21147-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:default-k8s-diff-port-799704 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.63 Port:8444 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0903 23:39:31.419512  168525 iso.go:125] acquiring lock: {Name:mk1032fbced3c9e76ba5a04480289e9f07d0eb9c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0903 23:39:31.421091  168525 out.go:179] * Starting "default-k8s-diff-port-799704" primary control-plane node in "default-k8s-diff-port-799704" cluster
	I0903 23:39:31.422103  168525 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0903 23:39:31.422147  168525 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21341-109162/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4
	I0903 23:39:31.422156  168525 cache.go:58] Caching tarball of preloaded images
	I0903 23:39:31.422278  168525 preload.go:172] Found /home/jenkins/minikube-integration/21341-109162/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0903 23:39:31.422293  168525 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
	I0903 23:39:31.422425  168525 profile.go:143] Saving config to /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/default-k8s-diff-port-799704/config.json ...
	I0903 23:39:31.422671  168525 start.go:360] acquireMachinesLock for default-k8s-diff-port-799704: {Name:mkcbe368d68a51a2a3c0eadc653c4df7d9736b4b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0903 23:39:31.422720  168525 start.go:364] duration metric: took 26.407µs to acquireMachinesLock for "default-k8s-diff-port-799704"
	I0903 23:39:31.422741  168525 start.go:96] Skipping create...Using existing machine configuration
	I0903 23:39:31.422748  168525 fix.go:54] fixHost starting: 
	I0903 23:39:31.423078  168525 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21341-109162/.minikube/bin/docker-machine-driver-kvm2
	I0903 23:39:31.423117  168525 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0903 23:39:31.441527  168525 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35531
	I0903 23:39:31.442203  168525 main.go:141] libmachine: () Calling .GetVersion
	I0903 23:39:31.442786  168525 main.go:141] libmachine: Using API Version  1
	I0903 23:39:31.442812  168525 main.go:141] libmachine: () Calling .SetConfigRaw
	I0903 23:39:31.443215  168525 main.go:141] libmachine: () Calling .GetMachineName
	I0903 23:39:31.443398  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) Calling .DriverName
	I0903 23:39:31.443541  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) Calling .GetState
	I0903 23:39:31.445456  168525 fix.go:112] recreateIfNeeded on default-k8s-diff-port-799704: state=Stopped err=<nil>
	I0903 23:39:31.445508  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) Calling .DriverName
	W0903 23:39:31.449565  168525 fix.go:138] unexpected machine state, will restart: <nil>
	I0903 23:39:30.924315  167951 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0903 23:39:30.924344  167951 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0903 23:39:30.924364  167951 main.go:141] libmachine: (no-preload-434043) Calling .GetSSHHostname
	I0903 23:39:30.925334  167951 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0903 23:39:30.925362  167951 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0903 23:39:30.925405  167951 main.go:141] libmachine: (no-preload-434043) Calling .GetSSHHostname
	I0903 23:39:30.928751  167951 main.go:141] libmachine: (no-preload-434043) DBG | domain no-preload-434043 has defined MAC address 52:54:00:18:6a:cb in network mk-no-preload-434043
	I0903 23:39:30.929980  167951 main.go:141] libmachine: (no-preload-434043) DBG | domain no-preload-434043 has defined MAC address 52:54:00:18:6a:cb in network mk-no-preload-434043
	I0903 23:39:30.930221  167951 main.go:141] libmachine: (no-preload-434043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:6a:cb", ip: ""} in network mk-no-preload-434043: {Iface:virbr4 ExpiryTime:2025-09-04 00:38:55 +0000 UTC Type:0 Mac:52:54:00:18:6a:cb Iaid: IPaddr:192.168.72.145 Prefix:24 Hostname:no-preload-434043 Clientid:01:52:54:00:18:6a:cb}
	I0903 23:39:30.930285  167951 main.go:141] libmachine: (no-preload-434043) DBG | domain no-preload-434043 has defined IP address 192.168.72.145 and MAC address 52:54:00:18:6a:cb in network mk-no-preload-434043
	I0903 23:39:30.930682  167951 main.go:141] libmachine: (no-preload-434043) Calling .GetSSHPort
	I0903 23:39:30.930861  167951 main.go:141] libmachine: (no-preload-434043) Calling .GetSSHKeyPath
	I0903 23:39:30.931062  167951 main.go:141] libmachine: (no-preload-434043) Calling .GetSSHUsername
	I0903 23:39:30.931098  167951 main.go:141] libmachine: (no-preload-434043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:6a:cb", ip: ""} in network mk-no-preload-434043: {Iface:virbr4 ExpiryTime:2025-09-04 00:38:55 +0000 UTC Type:0 Mac:52:54:00:18:6a:cb Iaid: IPaddr:192.168.72.145 Prefix:24 Hostname:no-preload-434043 Clientid:01:52:54:00:18:6a:cb}
	I0903 23:39:30.931116  167951 main.go:141] libmachine: (no-preload-434043) DBG | domain no-preload-434043 has defined IP address 192.168.72.145 and MAC address 52:54:00:18:6a:cb in network mk-no-preload-434043
	I0903 23:39:30.931175  167951 sshutil.go:53] new ssh client: &{IP:192.168.72.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21341-109162/.minikube/machines/no-preload-434043/id_rsa Username:docker}
	I0903 23:39:30.932066  167951 main.go:141] libmachine: (no-preload-434043) Calling .GetSSHPort
	I0903 23:39:30.932251  167951 main.go:141] libmachine: (no-preload-434043) Calling .GetSSHKeyPath
	I0903 23:39:30.932469  167951 main.go:141] libmachine: (no-preload-434043) Calling .GetSSHUsername
	I0903 23:39:30.932671  167951 sshutil.go:53] new ssh client: &{IP:192.168.72.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21341-109162/.minikube/machines/no-preload-434043/id_rsa Username:docker}
	I0903 23:39:30.933250  167951 main.go:141] libmachine: (no-preload-434043) DBG | domain no-preload-434043 has defined MAC address 52:54:00:18:6a:cb in network mk-no-preload-434043
	I0903 23:39:30.933904  167951 main.go:141] libmachine: (no-preload-434043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:6a:cb", ip: ""} in network mk-no-preload-434043: {Iface:virbr4 ExpiryTime:2025-09-04 00:38:55 +0000 UTC Type:0 Mac:52:54:00:18:6a:cb Iaid: IPaddr:192.168.72.145 Prefix:24 Hostname:no-preload-434043 Clientid:01:52:54:00:18:6a:cb}
	I0903 23:39:30.933932  167951 main.go:141] libmachine: (no-preload-434043) DBG | domain no-preload-434043 has defined IP address 192.168.72.145 and MAC address 52:54:00:18:6a:cb in network mk-no-preload-434043
	I0903 23:39:30.937721  167951 main.go:141] libmachine: (no-preload-434043) Calling .GetSSHPort
	I0903 23:39:30.938011  167951 main.go:141] libmachine: (no-preload-434043) Calling .GetSSHKeyPath
	I0903 23:39:30.938313  167951 main.go:141] libmachine: (no-preload-434043) Calling .GetSSHUsername
	I0903 23:39:30.938593  167951 sshutil.go:53] new ssh client: &{IP:192.168.72.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21341-109162/.minikube/machines/no-preload-434043/id_rsa Username:docker}
	I0903 23:39:30.942958  167951 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33017
	I0903 23:39:30.943534  167951 main.go:141] libmachine: () Calling .GetVersion
	I0903 23:39:30.944030  167951 main.go:141] libmachine: Using API Version  1
	I0903 23:39:30.944053  167951 main.go:141] libmachine: () Calling .SetConfigRaw
	I0903 23:39:30.944469  167951 main.go:141] libmachine: () Calling .GetMachineName
	I0903 23:39:30.945591  167951 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21341-109162/.minikube/bin/docker-machine-driver-kvm2
	I0903 23:39:30.949659  167951 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0903 23:39:30.970235  167951 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43737
	I0903 23:39:30.970997  167951 main.go:141] libmachine: () Calling .GetVersion
	I0903 23:39:30.971694  167951 main.go:141] libmachine: Using API Version  1
	I0903 23:39:30.971723  167951 main.go:141] libmachine: () Calling .SetConfigRaw
	I0903 23:39:30.972120  167951 main.go:141] libmachine: () Calling .GetMachineName
	I0903 23:39:30.972343  167951 main.go:141] libmachine: (no-preload-434043) Calling .GetState
	I0903 23:39:30.974525  167951 main.go:141] libmachine: (no-preload-434043) Calling .DriverName
	I0903 23:39:30.974767  167951 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0903 23:39:30.974786  167951 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0903 23:39:30.974806  167951 main.go:141] libmachine: (no-preload-434043) Calling .GetSSHHostname
	I0903 23:39:30.978640  167951 main.go:141] libmachine: (no-preload-434043) DBG | domain no-preload-434043 has defined MAC address 52:54:00:18:6a:cb in network mk-no-preload-434043
	I0903 23:39:30.979150  167951 main.go:141] libmachine: (no-preload-434043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:6a:cb", ip: ""} in network mk-no-preload-434043: {Iface:virbr4 ExpiryTime:2025-09-04 00:38:55 +0000 UTC Type:0 Mac:52:54:00:18:6a:cb Iaid: IPaddr:192.168.72.145 Prefix:24 Hostname:no-preload-434043 Clientid:01:52:54:00:18:6a:cb}
	I0903 23:39:30.979183  167951 main.go:141] libmachine: (no-preload-434043) DBG | domain no-preload-434043 has defined IP address 192.168.72.145 and MAC address 52:54:00:18:6a:cb in network mk-no-preload-434043
	I0903 23:39:30.979349  167951 main.go:141] libmachine: (no-preload-434043) Calling .GetSSHPort
	I0903 23:39:30.979545  167951 main.go:141] libmachine: (no-preload-434043) Calling .GetSSHKeyPath
	I0903 23:39:30.979734  167951 main.go:141] libmachine: (no-preload-434043) Calling .GetSSHUsername
	I0903 23:39:30.979898  167951 sshutil.go:53] new ssh client: &{IP:192.168.72.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21341-109162/.minikube/machines/no-preload-434043/id_rsa Username:docker}
	I0903 23:39:31.130703  167951 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0903 23:39:31.167066  167951 node_ready.go:35] waiting up to 6m0s for node "no-preload-434043" to be "Ready" ...
	I0903 23:39:31.174901  167951 node_ready.go:49] node "no-preload-434043" is "Ready"
	I0903 23:39:31.174933  167951 node_ready.go:38] duration metric: took 7.827583ms for node "no-preload-434043" to be "Ready" ...
	I0903 23:39:31.174948  167951 api_server.go:52] waiting for apiserver process to appear ...
	I0903 23:39:31.174996  167951 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:39:31.209527  167951 api_server.go:72] duration metric: took 516.97608ms to wait for apiserver process to appear ...
	I0903 23:39:31.209554  167951 api_server.go:88] waiting for apiserver healthz status ...
	I0903 23:39:31.209577  167951 api_server.go:253] Checking apiserver healthz at https://192.168.72.145:8443/healthz ...
	I0903 23:39:31.218555  167951 api_server.go:279] https://192.168.72.145:8443/healthz returned 200:
	ok
	I0903 23:39:31.221061  167951 api_server.go:141] control plane version: v1.34.0
	I0903 23:39:31.221085  167951 api_server.go:131] duration metric: took 11.521702ms to wait for apiserver health ...
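	(The healthz probe above can be reproduced by hand; a hedged equivalent, with curl's -k standing in for the cluster CA bundle the test client actually trusts:)
	$ curl -k https://192.168.72.145:8443/healthz
	ok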
	I0903 23:39:31.221095  167951 system_pods.go:43] waiting for kube-system pods to appear ...
	I0903 23:39:31.228196  167951 system_pods.go:59] 8 kube-system pods found
	I0903 23:39:31.228233  167951 system_pods.go:61] "coredns-66bc5c9577-z2s2p" [d39823a0-08dc-474c-bf6b-40d74bb06086] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0903 23:39:31.228243  167951 system_pods.go:61] "etcd-no-preload-434043" [cb3bdc9b-2cc5-48bf-af81-e466291b15ae] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0903 23:39:31.228253  167951 system_pods.go:61] "kube-apiserver-no-preload-434043" [bbc48910-bfce-4152-a0d9-213fab7b0e9d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0903 23:39:31.228262  167951 system_pods.go:61] "kube-controller-manager-no-preload-434043" [368d7eae-18f4-4a7c-9d38-5dba34a34a0d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0903 23:39:31.228268  167951 system_pods.go:61] "kube-proxy-lf7rz" [d3a15894-b9c5-47b0-9486-4b2f0a646a66] Running
	I0903 23:39:31.228279  167951 system_pods.go:61] "kube-scheduler-no-preload-434043" [01f11d9a-a42b-47df-93f8-7a6d34f05eaf] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0903 23:39:31.228287  167951 system_pods.go:61] "metrics-server-746fcd58dc-qn2mm" [e256b1d8-cce6-4144-aa59-a9a030f99eb9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0903 23:39:31.228301  167951 system_pods.go:61] "storage-provisioner" [52149bb2-d696-46fd-a4e6-15ccafdebf02] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0903 23:39:31.228313  167951 system_pods.go:74] duration metric: took 7.210776ms to wait for pod list to return data ...
	I0903 23:39:31.228326  167951 default_sa.go:34] waiting for default service account to be created ...
	I0903 23:39:31.234005  167951 default_sa.go:45] found service account: "default"
	I0903 23:39:31.234030  167951 default_sa.go:55] duration metric: took 5.694551ms for default service account to be created ...
	I0903 23:39:31.234042  167951 system_pods.go:116] waiting for k8s-apps to be running ...
	I0903 23:39:31.239296  167951 system_pods.go:86] 8 kube-system pods found
	I0903 23:39:31.239329  167951 system_pods.go:89] "coredns-66bc5c9577-z2s2p" [d39823a0-08dc-474c-bf6b-40d74bb06086] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0903 23:39:31.239340  167951 system_pods.go:89] "etcd-no-preload-434043" [cb3bdc9b-2cc5-48bf-af81-e466291b15ae] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0903 23:39:31.239351  167951 system_pods.go:89] "kube-apiserver-no-preload-434043" [bbc48910-bfce-4152-a0d9-213fab7b0e9d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0903 23:39:31.239362  167951 system_pods.go:89] "kube-controller-manager-no-preload-434043" [368d7eae-18f4-4a7c-9d38-5dba34a34a0d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0903 23:39:31.239371  167951 system_pods.go:89] "kube-proxy-lf7rz" [d3a15894-b9c5-47b0-9486-4b2f0a646a66] Running
	I0903 23:39:31.239384  167951 system_pods.go:89] "kube-scheduler-no-preload-434043" [01f11d9a-a42b-47df-93f8-7a6d34f05eaf] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0903 23:39:31.239394  167951 system_pods.go:89] "metrics-server-746fcd58dc-qn2mm" [e256b1d8-cce6-4144-aa59-a9a030f99eb9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0903 23:39:31.239405  167951 system_pods.go:89] "storage-provisioner" [52149bb2-d696-46fd-a4e6-15ccafdebf02] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0903 23:39:31.239413  167951 system_pods.go:126] duration metric: took 5.365177ms to wait for k8s-apps to be running ...
	I0903 23:39:31.239425  167951 system_svc.go:44] waiting for kubelet service to be running ....
	I0903 23:39:31.239473  167951 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0903 23:39:31.292169  167951 system_svc.go:56] duration metric: took 52.735418ms WaitForService to wait for kubelet
	I0903 23:39:31.292202  167951 kubeadm.go:578] duration metric: took 599.654473ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0903 23:39:31.292225  167951 node_conditions.go:102] verifying NodePressure condition ...
	I0903 23:39:31.298898  167951 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0903 23:39:31.298922  167951 node_conditions.go:123] node cpu capacity is 2
	I0903 23:39:31.298936  167951 node_conditions.go:105] duration metric: took 6.70535ms to run NodePressure ...
	I0903 23:39:31.298952  167951 start.go:241] waiting for startup goroutines ...
	I0903 23:39:31.319927  167951 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0903 23:39:31.319948  167951 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0903 23:39:31.325067  167951 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0903 23:39:31.325090  167951 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0903 23:39:31.329142  167951 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0903 23:39:31.347147  167951 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0903 23:39:31.409588  167951 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0903 23:39:31.409615  167951 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0903 23:39:31.411804  167951 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0903 23:39:31.411826  167951 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0903 23:39:31.497017  167951 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0903 23:39:31.497047  167951 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0903 23:39:31.505080  167951 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0903 23:39:31.505110  167951 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0903 23:39:31.564683  167951 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0903 23:39:31.568463  167951 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0903 23:39:31.568495  167951 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0903 23:39:31.636504  167951 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0903 23:39:31.636548  167951 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0903 23:39:31.712523  167951 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0903 23:39:31.712560  167951 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0903 23:39:31.768671  167951 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0903 23:39:31.768718  167951 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0903 23:39:31.852511  167951 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0903 23:39:31.852556  167951 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0903 23:39:31.933535  167951 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0903 23:39:31.933572  167951 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0903 23:39:32.030695  167951 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0903 23:39:35.006492  167951 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.659296879s)
	I0903 23:39:35.006576  167951 main.go:141] libmachine: Making call to close driver server
	I0903 23:39:35.006592  167951 main.go:141] libmachine: (no-preload-434043) Calling .Close
	I0903 23:39:35.006963  167951 main.go:141] libmachine: Successfully made call to close driver server
	I0903 23:39:35.006986  167951 main.go:141] libmachine: Making call to close connection to plugin binary
	I0903 23:39:35.006998  167951 main.go:141] libmachine: Making call to close driver server
	I0903 23:39:35.007008  167951 main.go:141] libmachine: (no-preload-434043) Calling .Close
	I0903 23:39:35.007538  167951 main.go:141] libmachine: (no-preload-434043) DBG | Closing plugin on server side
	I0903 23:39:35.007589  167951 main.go:141] libmachine: Successfully made call to close driver server
	I0903 23:39:35.007620  167951 main.go:141] libmachine: Making call to close connection to plugin binary
	I0903 23:39:35.010661  167951 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.681478522s)
	I0903 23:39:35.010699  167951 main.go:141] libmachine: Making call to close driver server
	I0903 23:39:35.010709  167951 main.go:141] libmachine: (no-preload-434043) Calling .Close
	I0903 23:39:35.011031  167951 main.go:141] libmachine: Successfully made call to close driver server
	I0903 23:39:35.011053  167951 main.go:141] libmachine: Making call to close connection to plugin binary
	I0903 23:39:35.011063  167951 main.go:141] libmachine: Making call to close driver server
	I0903 23:39:35.011072  167951 main.go:141] libmachine: (no-preload-434043) Calling .Close
	I0903 23:39:35.012729  167951 main.go:141] libmachine: (no-preload-434043) DBG | Closing plugin on server side
	I0903 23:39:35.012763  167951 main.go:141] libmachine: Successfully made call to close driver server
	I0903 23:39:35.012780  167951 main.go:141] libmachine: Making call to close connection to plugin binary
	I0903 23:39:35.093772  167951 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (3.529031226s)
	I0903 23:39:35.093830  167951 main.go:141] libmachine: Making call to close driver server
	I0903 23:39:35.093846  167951 main.go:141] libmachine: (no-preload-434043) Calling .Close
	I0903 23:39:35.094207  167951 main.go:141] libmachine: Successfully made call to close driver server
	I0903 23:39:35.094235  167951 main.go:141] libmachine: Making call to close connection to plugin binary
	I0903 23:39:35.094246  167951 main.go:141] libmachine: Making call to close driver server
	I0903 23:39:35.094254  167951 main.go:141] libmachine: (no-preload-434043) Calling .Close
	I0903 23:39:35.098319  167951 main.go:141] libmachine: (no-preload-434043) DBG | Closing plugin on server side
	I0903 23:39:35.098337  167951 main.go:141] libmachine: Successfully made call to close driver server
	I0903 23:39:35.098358  167951 main.go:141] libmachine: Making call to close connection to plugin binary
	I0903 23:39:35.098371  167951 addons.go:479] Verifying addon metrics-server=true in "no-preload-434043"
	I0903 23:39:35.098550  167951 main.go:141] libmachine: Making call to close driver server
	I0903 23:39:35.098568  167951 main.go:141] libmachine: (no-preload-434043) Calling .Close
	I0903 23:39:35.098881  167951 main.go:141] libmachine: Successfully made call to close driver server
	I0903 23:39:35.098898  167951 main.go:141] libmachine: Making call to close connection to plugin binary
	I0903 23:39:35.294568  167951 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (3.263818135s)
	I0903 23:39:35.294653  167951 main.go:141] libmachine: Making call to close driver server
	I0903 23:39:35.294676  167951 main.go:141] libmachine: (no-preload-434043) Calling .Close
	I0903 23:39:35.295105  167951 main.go:141] libmachine: Successfully made call to close driver server
	I0903 23:39:35.295130  167951 main.go:141] libmachine: Making call to close connection to plugin binary
	I0903 23:39:35.295140  167951 main.go:141] libmachine: Making call to close driver server
	I0903 23:39:35.295149  167951 main.go:141] libmachine: (no-preload-434043) Calling .Close
	I0903 23:39:35.297127  167951 main.go:141] libmachine: (no-preload-434043) DBG | Closing plugin on server side
	I0903 23:39:35.297151  167951 main.go:141] libmachine: Successfully made call to close driver server
	I0903 23:39:35.297172  167951 main.go:141] libmachine: Making call to close connection to plugin binary
	I0903 23:39:35.298897  167951 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features, please run:
	
		minikube -p no-preload-434043 addons enable metrics-server
	
	I0903 23:39:35.300309  167951 out.go:179] * Enabled addons: storage-provisioner, metrics-server, default-storageclass, dashboard
	I0903 23:39:30.569160  168184 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0903 23:39:30.585799  168184 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0903 23:39:30.590817  168184 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep  3 22:28 /usr/share/ca-certificates/minikubeCA.pem
	I0903 23:39:30.590881  168184 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0903 23:39:30.598100  168184 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0903 23:39:30.611138  168184 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/113288.pem && ln -fs /usr/share/ca-certificates/113288.pem /etc/ssl/certs/113288.pem"
	I0903 23:39:30.626975  168184 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/113288.pem
	I0903 23:39:30.631962  168184 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep  3 22:36 /usr/share/ca-certificates/113288.pem
	I0903 23:39:30.632013  168184 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/113288.pem
	I0903 23:39:30.639457  168184 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/113288.pem /etc/ssl/certs/51391683.0"
	I0903 23:39:30.652349  168184 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1132882.pem && ln -fs /usr/share/ca-certificates/1132882.pem /etc/ssl/certs/1132882.pem"
	I0903 23:39:30.669722  168184 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1132882.pem
	I0903 23:39:30.676323  168184 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep  3 22:36 /usr/share/ca-certificates/1132882.pem
	I0903 23:39:30.676391  168184 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1132882.pem
	I0903 23:39:30.684739  168184 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1132882.pem /etc/ssl/certs/3ec20f2e.0"
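	(The pattern across the three certificate blocks above: openssl's -hash flag prints the 8-hex-digit subject hash, and that hash names the /etc/ssl/certs/<hash>.0 symlink that TLS libraries use for CA lookup. For minikubeCA.pem the log itself shows the pair:)
	$ openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
	$ sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0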
	I0903 23:39:30.698776  168184 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0903 23:39:30.705787  168184 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0903 23:39:30.715596  168184 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0903 23:39:30.723820  168184 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0903 23:39:30.734268  168184 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0903 23:39:30.751209  168184 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0903 23:39:30.769986  168184 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
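	(A note on the -checkend 86400 runs above: openssl exits 0 when the certificate will still be valid 86400 seconds, i.e. 24 hours, from now, and non-zero when it expires sooner, so a zero exit means no rotation is needed. A minimal sketch:)
	$ openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400 && echo "valid for >24h" || echo "expires within 24h"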
	I0903 23:39:30.779742  168184 kubeadm.go:392] StartCluster: {Name:embed-certs-088493 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21147/minikube-v1.36.0-1753487480-21147-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:embed-certs-088493 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.143 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0903 23:39:30.779870  168184 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0903 23:39:30.779944  168184 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0903 23:39:30.826700  168184 cri.go:89] found id: ""
	I0903 23:39:30.826791  168184 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0903 23:39:30.843146  168184 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0903 23:39:30.843174  168184 kubeadm.go:589] restartPrimaryControlPlane start ...
	I0903 23:39:30.843233  168184 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0903 23:39:30.856578  168184 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0903 23:39:30.857287  168184 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-088493" does not appear in /home/jenkins/minikube-integration/21341-109162/kubeconfig
	I0903 23:39:30.857752  168184 kubeconfig.go:62] /home/jenkins/minikube-integration/21341-109162/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-088493" cluster setting kubeconfig missing "embed-certs-088493" context setting]
	I0903 23:39:30.858340  168184 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21341-109162/kubeconfig: {Name:mkb8e4377749caffe9aea4452a6b5de9f0dc7427 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0903 23:39:30.859693  168184 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0903 23:39:30.872955  168184 kubeadm.go:626] The running cluster does not require reconfiguration: 192.168.50.143
	I0903 23:39:30.873001  168184 kubeadm.go:1152] stopping kube-system containers ...
	I0903 23:39:30.873018  168184 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0903 23:39:30.873080  168184 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0903 23:39:30.937819  168184 cri.go:89] found id: ""
	I0903 23:39:30.937898  168184 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0903 23:39:30.970391  168184 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0903 23:39:30.985618  168184 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0903 23:39:30.985641  168184 kubeadm.go:157] found existing configuration files:
	
	I0903 23:39:30.985702  168184 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0903 23:39:30.997473  168184 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0903 23:39:30.997551  168184 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0903 23:39:31.011825  168184 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0903 23:39:31.026448  168184 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0903 23:39:31.026510  168184 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0903 23:39:31.039622  168184 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0903 23:39:31.051294  168184 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0903 23:39:31.051360  168184 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0903 23:39:31.065244  168184 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0903 23:39:31.077889  168184 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0903 23:39:31.077952  168184 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
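The grep/rm pairs above are a staleness check: each kubeadm-managed kubeconfig is kept only if it already names the expected endpoint https://control-plane.minikube.internal:8443, and anything else is deleted so the kubeconfig phase below can regenerate it. In this run the files do not exist at all (grep exits with status 2), so every `rm -f` is a no-op. The pattern, sketched in Go:

package main

import (
	"bytes"
	"fmt"
	"os"
)

func main() {
	endpoint := []byte("https://control-plane.minikube.internal:8443")
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		// Unreadable or endpoint missing: remove so kubeadm regenerates it.
		if err != nil || !bytes.Contains(data, endpoint) {
			os.Remove(f) // equivalent of: sudo rm -f <f>
			fmt.Printf("removed %s (expected endpoint not found)\n", f)
		}
	}
}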
	I0903 23:39:31.093981  168184 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0903 23:39:31.108296  168184 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0903 23:39:31.176874  168184 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0903 23:39:32.823767  168184 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.646847779s)
	I0903 23:39:32.823806  168184 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0903 23:39:33.102206  168184 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0903 23:39:33.185673  168184 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
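With the stale configs cleared, the restart path replays individual `kubeadm init` phases rather than a full init: certs, kubeconfig, kubelet-start, control-plane, and local etcd, all against the same /var/tmp/minikube/kubeadm.yaml. A local sketch of that sequence (in the log each command actually runs over SSH with PATH pointing at /var/lib/minikube/binaries/v1.34.0):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cfg := "/var/tmp/minikube/kubeadm.yaml"
	// Phase order as it appears in the log above.
	phases := [][]string{
		{"certs", "all"},
		{"kubeconfig", "all"},
		{"kubelet-start"},
		{"control-plane", "all"},
		{"etcd", "local"},
	}
	for _, p := range phases {
		args := append(append([]string{"init", "phase"}, p...), "--config", cfg)
		out, err := exec.Command("kubeadm", args...).CombinedOutput()
		fmt.Printf("kubeadm %v: err=%v\n%s", args, err, out)
	}
}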
	I0903 23:39:33.256402  168184 api_server.go:52] waiting for apiserver process to appear ...
	I0903 23:39:33.256504  168184 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:39:33.757483  168184 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:39:34.256629  168184 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:39:34.756682  168184 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:39:35.257560  168184 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:39:31.451460  168525 out.go:252] * Restarting existing kvm2 VM for "default-k8s-diff-port-799704" ...
	I0903 23:39:31.451487  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) Calling .Start
	I0903 23:39:31.451677  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) starting domain...
	I0903 23:39:31.451780  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) ensuring networks are active...
	I0903 23:39:31.452685  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) Ensuring network default is active
	I0903 23:39:31.453151  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) Ensuring network mk-default-k8s-diff-port-799704 is active
	I0903 23:39:31.453750  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) getting domain XML...
	I0903 23:39:31.454639  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) creating domain...
	I0903 23:39:32.850704  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) waiting for IP...
	I0903 23:39:32.851600  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | domain default-k8s-diff-port-799704 has defined MAC address 52:54:00:a0:5b:2e in network mk-default-k8s-diff-port-799704
	I0903 23:39:32.852214  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | unable to find current IP address of domain default-k8s-diff-port-799704 in network mk-default-k8s-diff-port-799704
	I0903 23:39:32.852359  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | I0903 23:39:32.852203  168561 retry.go:31] will retry after 194.562879ms: waiting for domain to come up
	I0903 23:39:33.049200  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | domain default-k8s-diff-port-799704 has defined MAC address 52:54:00:a0:5b:2e in network mk-default-k8s-diff-port-799704
	I0903 23:39:33.049910  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | unable to find current IP address of domain default-k8s-diff-port-799704 in network mk-default-k8s-diff-port-799704
	I0903 23:39:33.049989  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | I0903 23:39:33.049872  168561 retry.go:31] will retry after 346.789216ms: waiting for domain to come up
	I0903 23:39:33.398907  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | domain default-k8s-diff-port-799704 has defined MAC address 52:54:00:a0:5b:2e in network mk-default-k8s-diff-port-799704
	I0903 23:39:33.399505  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | unable to find current IP address of domain default-k8s-diff-port-799704 in network mk-default-k8s-diff-port-799704
	I0903 23:39:33.399547  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | I0903 23:39:33.399469  168561 retry.go:31] will retry after 396.68152ms: waiting for domain to come up
	I0903 23:39:33.798263  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | domain default-k8s-diff-port-799704 has defined MAC address 52:54:00:a0:5b:2e in network mk-default-k8s-diff-port-799704
	I0903 23:39:33.799050  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | unable to find current IP address of domain default-k8s-diff-port-799704 in network mk-default-k8s-diff-port-799704
	I0903 23:39:33.799087  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | I0903 23:39:33.798998  168561 retry.go:31] will retry after 388.322823ms: waiting for domain to come up
	I0903 23:39:34.188660  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | domain default-k8s-diff-port-799704 has defined MAC address 52:54:00:a0:5b:2e in network mk-default-k8s-diff-port-799704
	I0903 23:39:34.189376  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | unable to find current IP address of domain default-k8s-diff-port-799704 in network mk-default-k8s-diff-port-799704
	I0903 23:39:34.189482  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | I0903 23:39:34.189334  168561 retry.go:31] will retry after 742.14172ms: waiting for domain to come up
	I0903 23:39:34.932960  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | domain default-k8s-diff-port-799704 has defined MAC address 52:54:00:a0:5b:2e in network mk-default-k8s-diff-port-799704
	I0903 23:39:34.933626  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | unable to find current IP address of domain default-k8s-diff-port-799704 in network mk-default-k8s-diff-port-799704
	I0903 23:39:34.933713  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | I0903 23:39:34.933579  168561 retry.go:31] will retry after 698.598056ms: waiting for domain to come up
	I0903 23:39:35.634753  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | domain default-k8s-diff-port-799704 has defined MAC address 52:54:00:a0:5b:2e in network mk-default-k8s-diff-port-799704
	I0903 23:39:35.635481  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | unable to find current IP address of domain default-k8s-diff-port-799704 in network mk-default-k8s-diff-port-799704
	I0903 23:39:35.635508  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | I0903 23:39:35.635369  168561 retry.go:31] will retry after 956.852118ms: waiting for domain to come up
	I0903 23:39:35.301402  167951 addons.go:514] duration metric: took 4.608814093s for enable addons: enabled=[storage-provisioner metrics-server default-storageclass dashboard]
	I0903 23:39:35.301452  167951 start.go:246] waiting for cluster config update ...
	I0903 23:39:35.301470  167951 start.go:255] writing updated cluster config ...
	I0903 23:39:35.301784  167951 ssh_runner.go:195] Run: rm -f paused
	I0903 23:39:35.306947  167951 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0903 23:39:35.311995  167951 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-z2s2p" in "kube-system" namespace to be "Ready" or be gone ...
	I0903 23:39:35.322196  167951 pod_ready.go:94] pod "coredns-66bc5c9577-z2s2p" is "Ready"
	I0903 23:39:35.322232  167951 pod_ready.go:86] duration metric: took 10.20611ms for pod "coredns-66bc5c9577-z2s2p" in "kube-system" namespace to be "Ready" or be gone ...
	I0903 23:39:35.327157  167951 pod_ready.go:83] waiting for pod "etcd-no-preload-434043" in "kube-system" namespace to be "Ready" or be gone ...
	W0903 23:39:37.336026  167951 pod_ready.go:104] pod "etcd-no-preload-434043" is not "Ready", error: <nil>
	I0903 23:39:38.836063  167951 pod_ready.go:94] pod "etcd-no-preload-434043" is "Ready"
	I0903 23:39:38.836099  167951 pod_ready.go:86] duration metric: took 3.508912099s for pod "etcd-no-preload-434043" in "kube-system" namespace to be "Ready" or be gone ...
	I0903 23:39:38.844005  167951 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-434043" in "kube-system" namespace to be "Ready" or be gone ...
	I0903 23:39:38.851465  167951 pod_ready.go:94] pod "kube-apiserver-no-preload-434043" is "Ready"
	I0903 23:39:38.851496  167951 pod_ready.go:86] duration metric: took 7.457768ms for pod "kube-apiserver-no-preload-434043" in "kube-system" namespace to be "Ready" or be gone ...
	I0903 23:39:38.853909  167951 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-434043" in "kube-system" namespace to be "Ready" or be gone ...
	I0903 23:39:38.859802  167951 pod_ready.go:94] pod "kube-controller-manager-no-preload-434043" is "Ready"
	I0903 23:39:38.859824  167951 pod_ready.go:86] duration metric: took 5.889234ms for pod "kube-controller-manager-no-preload-434043" in "kube-system" namespace to be "Ready" or be gone ...
	I0903 23:39:38.863186  167951 pod_ready.go:83] waiting for pod "kube-proxy-lf7rz" in "kube-system" namespace to be "Ready" or be gone ...
	I0903 23:39:39.113115  167951 pod_ready.go:94] pod "kube-proxy-lf7rz" is "Ready"
	I0903 23:39:39.113155  167951 pod_ready.go:86] duration metric: took 249.948168ms for pod "kube-proxy-lf7rz" in "kube-system" namespace to be "Ready" or be gone ...
	I0903 23:39:39.315739  167951 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-434043" in "kube-system" namespace to be "Ready" or be gone ...
	I0903 23:39:39.712333  167951 pod_ready.go:94] pod "kube-scheduler-no-preload-434043" is "Ready"
	I0903 23:39:39.712376  167951 pod_ready.go:86] duration metric: took 396.599596ms for pod "kube-scheduler-no-preload-434043" in "kube-system" namespace to be "Ready" or be gone ...
	I0903 23:39:39.712391  167951 pod_ready.go:40] duration metric: took 4.405411155s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0903 23:39:39.778245  167951 start.go:617] kubectl: 1.33.2, cluster: 1.34.0 (minor skew: 1)
	I0903 23:39:39.779595  167951 out.go:179] * Done! kubectl is now configured to use "no-preload-434043" cluster and "default" namespace by default
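The "extra waiting" block above, which closes out the no-preload-434043 start, gates on every control-plane pod reporting the PodReady condition, one label selector at a time. Approximately the same check written against client-go (an assumption for illustration; minikube's own helper lives in pod_ready.go, and the label selectors are copied from the log):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isReady reports whether the pod carries a true PodReady condition.
func isReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	selectors := []string{
		"k8s-app=kube-dns", "component=etcd", "component=kube-apiserver",
		"component=kube-controller-manager", "k8s-app=kube-proxy", "component=kube-scheduler",
	}
	deadline := time.Now().Add(4 * time.Minute)
	for _, sel := range selectors {
		for {
			pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{LabelSelector: sel})
			if err == nil && len(pods.Items) > 0 && isReady(&pods.Items[0]) {
				fmt.Printf("pod %q is Ready\n", pods.Items[0].Name)
				break
			}
			if time.Now().After(deadline) {
				fmt.Printf("timed out waiting on %s\n", sel)
				break
			}
			time.Sleep(500 * time.Millisecond)
		}
	}
}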
	I0903 23:39:35.756635  168184 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:39:35.795249  168184 api_server.go:72] duration metric: took 2.538848326s to wait for apiserver process to appear ...
	I0903 23:39:35.795285  168184 api_server.go:88] waiting for apiserver healthz status ...
	I0903 23:39:35.795314  168184 api_server.go:253] Checking apiserver healthz at https://192.168.50.143:8443/healthz ...
	I0903 23:39:38.583193  168184 api_server.go:279] https://192.168.50.143:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0903 23:39:38.583228  168184 api_server.go:103] status: https://192.168.50.143:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0903 23:39:38.583252  168184 api_server.go:253] Checking apiserver healthz at https://192.168.50.143:8443/healthz ...
	I0903 23:39:38.685816  168184 api_server.go:279] https://192.168.50.143:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0903 23:39:38.685847  168184 api_server.go:103] status: https://192.168.50.143:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0903 23:39:38.796197  168184 api_server.go:253] Checking apiserver healthz at https://192.168.50.143:8443/healthz ...
	I0903 23:39:38.802478  168184 api_server.go:279] https://192.168.50.143:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0903 23:39:38.802514  168184 api_server.go:103] status: https://192.168.50.143:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0903 23:39:39.296152  168184 api_server.go:253] Checking apiserver healthz at https://192.168.50.143:8443/healthz ...
	I0903 23:39:39.304676  168184 api_server.go:279] https://192.168.50.143:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0903 23:39:39.304709  168184 api_server.go:103] status: https://192.168.50.143:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0903 23:39:39.795900  168184 api_server.go:253] Checking apiserver healthz at https://192.168.50.143:8443/healthz ...
	I0903 23:39:39.808669  168184 api_server.go:279] https://192.168.50.143:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0903 23:39:39.808701  168184 api_server.go:103] status: https://192.168.50.143:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0903 23:39:40.296345  168184 api_server.go:253] Checking apiserver healthz at https://192.168.50.143:8443/healthz ...
	I0903 23:39:40.301248  168184 api_server.go:279] https://192.168.50.143:8443/healthz returned 200:
	ok
	I0903 23:39:40.308506  168184 api_server.go:141] control plane version: v1.34.0
	I0903 23:39:40.308532  168184 api_server.go:131] duration metric: took 4.513239874s to wait for apiserver health ...
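The healthz progression above is typical of a cold apiserver: 403 while anonymous access to /healthz is still disallowed, then 500 while the rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes poststarthooks finish, then a bare 200 "ok". A sketch of the poll loop (certificate verification is skipped here to keep the sketch self-contained; the real check trusts minikubeCA):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// InsecureSkipVerify is for the sketch only; production code
		// should pin the cluster CA instead.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	url := "https://192.168.50.143:8443/healthz"
	for deadline := time.Now().Add(2 * time.Minute); time.Now().Before(deadline); {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("%s returned 200: %s\n", url, body)
				return
			}
			// 403 and 500 both mean "not ready yet"; keep polling.
			fmt.Printf("%s returned %d, retrying\n", url, resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond) // matches the ~0.5s cadence in the log
	}
	fmt.Println("timed out waiting for apiserver healthz")
}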
	I0903 23:39:40.308544  168184 cni.go:84] Creating CNI manager for ""
	I0903 23:39:40.308560  168184 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0903 23:39:40.310257  168184 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I0903 23:39:40.311411  168184 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0903 23:39:40.324297  168184 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
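The 496-byte payload written above is a CNI conflist for the bridge plugin that the kvm2+crio combination recommends. A representative file of that shape (field values are illustrative, not the exact payload minikube writes):

package main

import (
	"fmt"
	"os"
)

// A typical bridge-plugin conflist: a bridge with host-local IPAM for pod
// addresses, chained with portmap for hostPort support.
const conflist = `{
  "cniVersion": "1.0.0",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}`

func main() {
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}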
	I0903 23:39:40.359191  168184 system_pods.go:43] waiting for kube-system pods to appear ...
	I0903 23:39:40.365887  168184 system_pods.go:59] 8 kube-system pods found
	I0903 23:39:40.365935  168184 system_pods.go:61] "coredns-66bc5c9577-hg9bb" [f8c43287-ec9a-48ad-b799-e5bb4b30b817] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0903 23:39:40.365948  168184 system_pods.go:61] "etcd-embed-certs-088493" [0917f6cc-6edc-4812-81c3-15f318021f57] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0903 23:39:40.365960  168184 system_pods.go:61] "kube-apiserver-embed-certs-088493" [5324d5b1-225a-4bab-8624-807c65f7737f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0903 23:39:40.365970  168184 system_pods.go:61] "kube-controller-manager-embed-certs-088493" [c15fb12e-7f6b-4bfe-977c-97d35447e245] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0903 23:39:40.365979  168184 system_pods.go:61] "kube-proxy-pgtpd" [083b9318-0780-4c96-8991-7534443b6159] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0903 23:39:40.365994  168184 system_pods.go:61] "kube-scheduler-embed-certs-088493" [41c8bd25-dbb6-4d53-8642-d6f837c5c859] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0903 23:39:40.366002  168184 system_pods.go:61] "metrics-server-746fcd58dc-85qvg" [000bf568-f6a0-4621-899d-788283765155] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0903 23:39:40.366010  168184 system_pods.go:61] "storage-provisioner" [7c1d1800-66c1-42f5-87c5-675fd6610230] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0903 23:39:40.366018  168184 system_pods.go:74] duration metric: took 6.796748ms to wait for pod list to return data ...
	I0903 23:39:40.366035  168184 node_conditions.go:102] verifying NodePressure condition ...
	I0903 23:39:40.370198  168184 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0903 23:39:40.370234  168184 node_conditions.go:123] node cpu capacity is 2
	I0903 23:39:40.370251  168184 node_conditions.go:105] duration metric: took 4.209293ms to run NodePressure ...
	I0903 23:39:40.370274  168184 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0903 23:39:40.700552  168184 kubeadm.go:720] waiting for restarted kubelet to initialise ...
	I0903 23:39:40.707329  168184 kubeadm.go:735] kubelet initialised
	I0903 23:39:40.707359  168184 kubeadm.go:736] duration metric: took 6.769898ms waiting for restarted kubelet to initialise ...
	I0903 23:39:40.707380  168184 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0903 23:39:40.742387  168184 ops.go:34] apiserver oom_adj: -16
	I0903 23:39:40.742423  168184 kubeadm.go:593] duration metric: took 9.899238858s to restartPrimaryControlPlane
	I0903 23:39:40.742436  168184 kubeadm.go:394] duration metric: took 9.962706136s to StartCluster
	I0903 23:39:40.742460  168184 settings.go:142] acquiring lock: {Name:mkb1ef9c34f4ee762bb1ce9c74e3b8a2e234a4c2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0903 23:39:40.742582  168184 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21341-109162/kubeconfig
	I0903 23:39:40.744274  168184 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21341-109162/kubeconfig: {Name:mkb8e4377749caffe9aea4452a6b5de9f0dc7427 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0903 23:39:40.744616  168184 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.143 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0903 23:39:40.744750  168184 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0903 23:39:40.744860  168184 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-088493"
	I0903 23:39:40.744868  168184 config.go:182] Loaded profile config "embed-certs-088493": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0903 23:39:40.744881  168184 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-088493"
	W0903 23:39:40.744893  168184 addons.go:247] addon storage-provisioner should already be in state true
	I0903 23:39:40.744922  168184 addons.go:69] Setting default-storageclass=true in profile "embed-certs-088493"
	I0903 23:39:40.744933  168184 addons.go:69] Setting metrics-server=true in profile "embed-certs-088493"
	I0903 23:39:40.744944  168184 addons.go:238] Setting addon metrics-server=true in "embed-certs-088493"
	I0903 23:39:40.744944  168184 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-088493"
	W0903 23:39:40.744954  168184 addons.go:247] addon metrics-server should already be in state true
	I0903 23:39:40.744973  168184 host.go:66] Checking if "embed-certs-088493" exists ...
	I0903 23:39:40.745459  168184 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21341-109162/.minikube/bin/docker-machine-driver-kvm2
	I0903 23:39:40.745485  168184 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21341-109162/.minikube/bin/docker-machine-driver-kvm2
	I0903 23:39:40.745506  168184 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0903 23:39:40.745535  168184 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0903 23:39:40.744924  168184 host.go:66] Checking if "embed-certs-088493" exists ...
	I0903 23:39:40.745779  168184 addons.go:69] Setting dashboard=true in profile "embed-certs-088493"
	I0903 23:39:40.745802  168184 addons.go:238] Setting addon dashboard=true in "embed-certs-088493"
	W0903 23:39:40.745830  168184 addons.go:247] addon dashboard should already be in state true
	I0903 23:39:40.745870  168184 host.go:66] Checking if "embed-certs-088493" exists ...
	I0903 23:39:40.746262  168184 out.go:179] * Verifying Kubernetes components...
	I0903 23:39:40.746282  168184 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21341-109162/.minikube/bin/docker-machine-driver-kvm2
	I0903 23:39:40.746267  168184 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21341-109162/.minikube/bin/docker-machine-driver-kvm2
	I0903 23:39:40.746391  168184 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0903 23:39:40.746425  168184 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0903 23:39:40.747698  168184 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0903 23:39:40.767429  168184 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35437
	I0903 23:39:40.767449  168184 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38997
	I0903 23:39:40.767992  168184 main.go:141] libmachine: () Calling .GetVersion
	I0903 23:39:40.768030  168184 main.go:141] libmachine: () Calling .GetVersion
	I0903 23:39:40.768589  168184 main.go:141] libmachine: Using API Version  1
	I0903 23:39:40.768620  168184 main.go:141] libmachine: () Calling .SetConfigRaw
	I0903 23:39:40.768921  168184 main.go:141] libmachine: Using API Version  1
	I0903 23:39:40.768944  168184 main.go:141] libmachine: () Calling .SetConfigRaw
	I0903 23:39:40.769038  168184 main.go:141] libmachine: () Calling .GetMachineName
	I0903 23:39:40.769266  168184 main.go:141] libmachine: (embed-certs-088493) Calling .GetState
	I0903 23:39:40.769418  168184 main.go:141] libmachine: () Calling .GetMachineName
	I0903 23:39:40.770014  168184 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45587
	I0903 23:39:40.770554  168184 main.go:141] libmachine: () Calling .GetVersion
	I0903 23:39:40.771097  168184 main.go:141] libmachine: Using API Version  1
	I0903 23:39:40.771115  168184 main.go:141] libmachine: () Calling .SetConfigRaw
	I0903 23:39:40.771582  168184 main.go:141] libmachine: () Calling .GetMachineName
	I0903 23:39:40.772143  168184 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21341-109162/.minikube/bin/docker-machine-driver-kvm2
	I0903 23:39:40.772190  168184 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0903 23:39:40.773072  168184 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21341-109162/.minikube/bin/docker-machine-driver-kvm2
	I0903 23:39:40.773117  168184 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0903 23:39:40.773482  168184 addons.go:238] Setting addon default-storageclass=true in "embed-certs-088493"
	W0903 23:39:40.773506  168184 addons.go:247] addon default-storageclass should already be in state true
	I0903 23:39:40.773541  168184 host.go:66] Checking if "embed-certs-088493" exists ...
	I0903 23:39:40.773952  168184 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21341-109162/.minikube/bin/docker-machine-driver-kvm2
	I0903 23:39:40.773999  168184 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0903 23:39:40.774960  168184 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36051
	I0903 23:39:40.775401  168184 main.go:141] libmachine: () Calling .GetVersion
	I0903 23:39:40.775921  168184 main.go:141] libmachine: Using API Version  1
	I0903 23:39:40.775942  168184 main.go:141] libmachine: () Calling .SetConfigRaw
	I0903 23:39:40.776349  168184 main.go:141] libmachine: () Calling .GetMachineName
	I0903 23:39:40.776900  168184 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21341-109162/.minikube/bin/docker-machine-driver-kvm2
	I0903 23:39:40.776938  168184 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0903 23:39:40.793573  168184 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42823
	I0903 23:39:40.794210  168184 main.go:141] libmachine: () Calling .GetVersion
	I0903 23:39:40.794795  168184 main.go:141] libmachine: Using API Version  1
	I0903 23:39:40.794822  168184 main.go:141] libmachine: () Calling .SetConfigRaw
	I0903 23:39:40.794889  168184 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37443
	I0903 23:39:40.795389  168184 main.go:141] libmachine: () Calling .GetMachineName
	I0903 23:39:40.795443  168184 main.go:141] libmachine: () Calling .GetVersion
	I0903 23:39:40.795827  168184 main.go:141] libmachine: Using API Version  1
	I0903 23:39:40.795843  168184 main.go:141] libmachine: () Calling .SetConfigRaw
	I0903 23:39:40.796051  168184 main.go:141] libmachine: (embed-certs-088493) Calling .GetState
	I0903 23:39:40.796242  168184 main.go:141] libmachine: () Calling .GetMachineName
	I0903 23:39:40.796398  168184 main.go:141] libmachine: (embed-certs-088493) Calling .GetState
	I0903 23:39:40.798691  168184 main.go:141] libmachine: (embed-certs-088493) Calling .DriverName
	I0903 23:39:40.799273  168184 main.go:141] libmachine: (embed-certs-088493) Calling .DriverName
	I0903 23:39:40.800606  168184 out.go:179]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0903 23:39:40.800622  168184 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0903 23:39:40.801751  168184 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0903 23:39:40.801768  168184 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0903 23:39:40.801852  168184 main.go:141] libmachine: (embed-certs-088493) Calling .GetSSHHostname
	I0903 23:39:40.803035  168184 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I0903 23:39:40.804238  168184 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0903 23:39:40.804257  168184 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0903 23:39:40.804278  168184 main.go:141] libmachine: (embed-certs-088493) Calling .GetSSHHostname
	I0903 23:39:40.804408  168184 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39483
	I0903 23:39:40.804948  168184 main.go:141] libmachine: () Calling .GetVersion
	I0903 23:39:40.806065  168184 main.go:141] libmachine: (embed-certs-088493) DBG | domain embed-certs-088493 has defined MAC address 52:54:00:49:bd:07 in network mk-embed-certs-088493
	I0903 23:39:40.806185  168184 main.go:141] libmachine: Using API Version  1
	I0903 23:39:40.806214  168184 main.go:141] libmachine: () Calling .SetConfigRaw
	I0903 23:39:40.806622  168184 main.go:141] libmachine: () Calling .GetMachineName
	I0903 23:39:40.807366  168184 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21341-109162/.minikube/bin/docker-machine-driver-kvm2
	I0903 23:39:40.807410  168184 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0903 23:39:40.807634  168184 main.go:141] libmachine: (embed-certs-088493) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:bd:07", ip: ""} in network mk-embed-certs-088493: {Iface:virbr2 ExpiryTime:2025-09-04 00:39:17 +0000 UTC Type:0 Mac:52:54:00:49:bd:07 Iaid: IPaddr:192.168.50.143 Prefix:24 Hostname:embed-certs-088493 Clientid:01:52:54:00:49:bd:07}
	I0903 23:39:40.807666  168184 main.go:141] libmachine: (embed-certs-088493) DBG | domain embed-certs-088493 has defined IP address 192.168.50.143 and MAC address 52:54:00:49:bd:07 in network mk-embed-certs-088493
	I0903 23:39:40.808118  168184 main.go:141] libmachine: (embed-certs-088493) DBG | domain embed-certs-088493 has defined MAC address 52:54:00:49:bd:07 in network mk-embed-certs-088493
	I0903 23:39:40.808378  168184 main.go:141] libmachine: (embed-certs-088493) Calling .GetSSHPort
	I0903 23:39:40.808540  168184 main.go:141] libmachine: (embed-certs-088493) Calling .GetSSHKeyPath
	I0903 23:39:40.808652  168184 main.go:141] libmachine: (embed-certs-088493) Calling .GetSSHUsername
	I0903 23:39:40.808753  168184 sshutil.go:53] new ssh client: &{IP:192.168.50.143 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21341-109162/.minikube/machines/embed-certs-088493/id_rsa Username:docker}
	I0903 23:39:40.813952  168184 main.go:141] libmachine: (embed-certs-088493) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:bd:07", ip: ""} in network mk-embed-certs-088493: {Iface:virbr2 ExpiryTime:2025-09-04 00:39:17 +0000 UTC Type:0 Mac:52:54:00:49:bd:07 Iaid: IPaddr:192.168.50.143 Prefix:24 Hostname:embed-certs-088493 Clientid:01:52:54:00:49:bd:07}
	I0903 23:39:40.813983  168184 main.go:141] libmachine: (embed-certs-088493) DBG | domain embed-certs-088493 has defined IP address 192.168.50.143 and MAC address 52:54:00:49:bd:07 in network mk-embed-certs-088493
	I0903 23:39:40.814174  168184 main.go:141] libmachine: (embed-certs-088493) Calling .GetSSHPort
	I0903 23:39:40.814360  168184 main.go:141] libmachine: (embed-certs-088493) Calling .GetSSHKeyPath
	I0903 23:39:40.815752  168184 main.go:141] libmachine: (embed-certs-088493) Calling .GetSSHUsername
	I0903 23:39:40.815909  168184 sshutil.go:53] new ssh client: &{IP:192.168.50.143 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21341-109162/.minikube/machines/embed-certs-088493/id_rsa Username:docker}
	I0903 23:39:40.824248  168184 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42633
	I0903 23:39:40.824946  168184 main.go:141] libmachine: () Calling .GetVersion
	I0903 23:39:40.825622  168184 main.go:141] libmachine: Using API Version  1
	I0903 23:39:40.825648  168184 main.go:141] libmachine: () Calling .SetConfigRaw
	I0903 23:39:40.826219  168184 main.go:141] libmachine: () Calling .GetMachineName
	I0903 23:39:40.826431  168184 main.go:141] libmachine: (embed-certs-088493) Calling .GetState
	I0903 23:39:40.828287  168184 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45625
	I0903 23:39:40.828447  168184 main.go:141] libmachine: (embed-certs-088493) Calling .DriverName
	I0903 23:39:40.828934  168184 main.go:141] libmachine: () Calling .GetVersion
	I0903 23:39:40.829313  168184 main.go:141] libmachine: Using API Version  1
	I0903 23:39:40.829328  168184 main.go:141] libmachine: () Calling .SetConfigRaw
	I0903 23:39:40.829707  168184 main.go:141] libmachine: () Calling .GetMachineName
	I0903 23:39:40.829930  168184 main.go:141] libmachine: (embed-certs-088493) Calling .GetState
	I0903 23:39:40.830176  168184 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0903 23:39:36.593552  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | domain default-k8s-diff-port-799704 has defined MAC address 52:54:00:a0:5b:2e in network mk-default-k8s-diff-port-799704
	I0903 23:39:36.594179  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | unable to find current IP address of domain default-k8s-diff-port-799704 in network mk-default-k8s-diff-port-799704
	I0903 23:39:36.594207  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | I0903 23:39:36.594112  168561 retry.go:31] will retry after 1.356760931s: waiting for domain to come up
	I0903 23:39:37.952896  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | domain default-k8s-diff-port-799704 has defined MAC address 52:54:00:a0:5b:2e in network mk-default-k8s-diff-port-799704
	I0903 23:39:37.953568  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | unable to find current IP address of domain default-k8s-diff-port-799704 in network mk-default-k8s-diff-port-799704
	I0903 23:39:37.953607  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | I0903 23:39:37.953473  168561 retry.go:31] will retry after 1.294359259s: waiting for domain to come up
	I0903 23:39:39.249609  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | domain default-k8s-diff-port-799704 has defined MAC address 52:54:00:a0:5b:2e in network mk-default-k8s-diff-port-799704
	I0903 23:39:39.250217  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | unable to find current IP address of domain default-k8s-diff-port-799704 in network mk-default-k8s-diff-port-799704
	I0903 23:39:39.250262  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | I0903 23:39:39.250156  168561 retry.go:31] will retry after 1.639365303s: waiting for domain to come up
	I0903 23:39:40.891606  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | domain default-k8s-diff-port-799704 has defined MAC address 52:54:00:a0:5b:2e in network mk-default-k8s-diff-port-799704
	I0903 23:39:40.892251  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | unable to find current IP address of domain default-k8s-diff-port-799704 in network mk-default-k8s-diff-port-799704
	I0903 23:39:40.892279  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | I0903 23:39:40.892154  168561 retry.go:31] will retry after 2.142708119s: waiting for domain to come up
	I0903 23:39:40.831548  168184 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0903 23:39:40.831567  168184 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0903 23:39:40.831594  168184 main.go:141] libmachine: (embed-certs-088493) Calling .GetSSHHostname
	I0903 23:39:40.831860  168184 main.go:141] libmachine: (embed-certs-088493) Calling .DriverName
	I0903 23:39:40.833031  168184 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0903 23:39:40.833048  168184 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0903 23:39:40.833066  168184 main.go:141] libmachine: (embed-certs-088493) Calling .GetSSHHostname
	I0903 23:39:40.835589  168184 main.go:141] libmachine: (embed-certs-088493) DBG | domain embed-certs-088493 has defined MAC address 52:54:00:49:bd:07 in network mk-embed-certs-088493
	I0903 23:39:40.836095  168184 main.go:141] libmachine: (embed-certs-088493) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:bd:07", ip: ""} in network mk-embed-certs-088493: {Iface:virbr2 ExpiryTime:2025-09-04 00:39:17 +0000 UTC Type:0 Mac:52:54:00:49:bd:07 Iaid: IPaddr:192.168.50.143 Prefix:24 Hostname:embed-certs-088493 Clientid:01:52:54:00:49:bd:07}
	I0903 23:39:40.836120  168184 main.go:141] libmachine: (embed-certs-088493) DBG | domain embed-certs-088493 has defined IP address 192.168.50.143 and MAC address 52:54:00:49:bd:07 in network mk-embed-certs-088493
	I0903 23:39:40.836634  168184 main.go:141] libmachine: (embed-certs-088493) DBG | domain embed-certs-088493 has defined MAC address 52:54:00:49:bd:07 in network mk-embed-certs-088493
	I0903 23:39:40.836881  168184 main.go:141] libmachine: (embed-certs-088493) Calling .GetSSHPort
	I0903 23:39:40.837063  168184 main.go:141] libmachine: (embed-certs-088493) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:bd:07", ip: ""} in network mk-embed-certs-088493: {Iface:virbr2 ExpiryTime:2025-09-04 00:39:17 +0000 UTC Type:0 Mac:52:54:00:49:bd:07 Iaid: IPaddr:192.168.50.143 Prefix:24 Hostname:embed-certs-088493 Clientid:01:52:54:00:49:bd:07}
	I0903 23:39:40.837087  168184 main.go:141] libmachine: (embed-certs-088493) DBG | domain embed-certs-088493 has defined IP address 192.168.50.143 and MAC address 52:54:00:49:bd:07 in network mk-embed-certs-088493
	I0903 23:39:40.837348  168184 main.go:141] libmachine: (embed-certs-088493) Calling .GetSSHPort
	I0903 23:39:40.838498  168184 main.go:141] libmachine: (embed-certs-088493) Calling .GetSSHKeyPath
	I0903 23:39:40.838667  168184 main.go:141] libmachine: (embed-certs-088493) Calling .GetSSHUsername
	I0903 23:39:40.838816  168184 sshutil.go:53] new ssh client: &{IP:192.168.50.143 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21341-109162/.minikube/machines/embed-certs-088493/id_rsa Username:docker}
	I0903 23:39:40.843815  168184 main.go:141] libmachine: (embed-certs-088493) Calling .GetSSHKeyPath
	I0903 23:39:40.844047  168184 main.go:141] libmachine: (embed-certs-088493) Calling .GetSSHUsername
	I0903 23:39:40.844370  168184 sshutil.go:53] new ssh client: &{IP:192.168.50.143 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21341-109162/.minikube/machines/embed-certs-088493/id_rsa Username:docker}
	I0903 23:39:41.113695  168184 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0903 23:39:41.140527  168184 node_ready.go:35] waiting up to 6m0s for node "embed-certs-088493" to be "Ready" ...
	I0903 23:39:41.252354  168184 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0903 23:39:41.252385  168184 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0903 23:39:41.306321  168184 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0903 23:39:41.310664  168184 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0903 23:39:41.310766  168184 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0903 23:39:41.341460  168184 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0903 23:39:41.341572  168184 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0903 23:39:41.348238  168184 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0903 23:39:41.399239  168184 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0903 23:39:41.399275  168184 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0903 23:39:41.412810  168184 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0903 23:39:41.412848  168184 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0903 23:39:41.489435  168184 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0903 23:39:41.538185  168184 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0903 23:39:41.538223  168184 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0903 23:39:41.592563  168184 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0903 23:39:41.592594  168184 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0903 23:39:41.676605  168184 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0903 23:39:41.676644  168184 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0903 23:39:41.728419  168184 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0903 23:39:41.728455  168184 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0903 23:39:41.766195  168184 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0903 23:39:41.766297  168184 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0903 23:39:41.819460  168184 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0903 23:39:41.819504  168184 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0903 23:39:41.870107  168184 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0903 23:39:41.870149  168184 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0903 23:39:41.918698  168184 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
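
Note the pattern in the apply command above: minikube first scp's every addon manifest into /etc/kubernetes/addons, then issues a single kubectl apply with one -f flag per staged file, so the whole dashboard lands in one API round trip. A small sketch of assembling that command line (helper name and manifest list are illustrative):

package main

import (
	"fmt"
	"strings"
)

// buildApplyCmd mirrors the log's pattern: one kubectl invocation,
// one -f flag per staged manifest under /etc/kubernetes/addons.
func buildApplyCmd(kubectl string, manifests []string) string {
	args := make([]string, 0, 2*len(manifests))
	for _, m := range manifests {
		args = append(args, "-f", m)
	}
	return fmt.Sprintf("sudo KUBECONFIG=/var/lib/minikube/kubeconfig %s apply %s",
		kubectl, strings.Join(args, " "))
}

func main() {
	fmt.Println(buildApplyCmd(
		"/var/lib/minikube/binaries/v1.34.0/kubectl",
		[]string{
			"/etc/kubernetes/addons/dashboard-ns.yaml",
			"/etc/kubernetes/addons/dashboard-svc.yaml",
		}))
}
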
	I0903 23:39:42.966984  168184 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.660540637s)
	I0903 23:39:42.967054  168184 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.618774457s)
	I0903 23:39:42.967081  168184 main.go:141] libmachine: Making call to close driver server
	I0903 23:39:42.967098  168184 main.go:141] libmachine: (embed-certs-088493) Calling .Close
	I0903 23:39:42.967101  168184 main.go:141] libmachine: Making call to close driver server
	I0903 23:39:42.967114  168184 main.go:141] libmachine: (embed-certs-088493) Calling .Close
	I0903 23:39:42.967189  168184 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.477716601s)
	I0903 23:39:42.967236  168184 main.go:141] libmachine: Making call to close driver server
	I0903 23:39:42.967261  168184 main.go:141] libmachine: (embed-certs-088493) Calling .Close
	I0903 23:39:42.969478  168184 main.go:141] libmachine: Successfully made call to close driver server
	I0903 23:39:42.969480  168184 main.go:141] libmachine: (embed-certs-088493) DBG | Closing plugin on server side
	I0903 23:39:42.969503  168184 main.go:141] libmachine: Making call to close connection to plugin binary
	I0903 23:39:42.969506  168184 main.go:141] libmachine: Successfully made call to close driver server
	I0903 23:39:42.969513  168184 main.go:141] libmachine: (embed-certs-088493) DBG | Closing plugin on server side
	I0903 23:39:42.969523  168184 main.go:141] libmachine: Making call to close connection to plugin binary
	I0903 23:39:42.969513  168184 main.go:141] libmachine: Making call to close driver server
	I0903 23:39:42.969546  168184 main.go:141] libmachine: (embed-certs-088493) Calling .Close
	I0903 23:39:42.969559  168184 main.go:141] libmachine: (embed-certs-088493) DBG | Closing plugin on server side
	I0903 23:39:42.969588  168184 main.go:141] libmachine: Successfully made call to close driver server
	I0903 23:39:42.969601  168184 main.go:141] libmachine: Making call to close connection to plugin binary
	I0903 23:39:42.969611  168184 main.go:141] libmachine: Making call to close driver server
	I0903 23:39:42.969628  168184 main.go:141] libmachine: (embed-certs-088493) Calling .Close
	I0903 23:39:42.969708  168184 main.go:141] libmachine: Making call to close driver server
	I0903 23:39:42.969726  168184 main.go:141] libmachine: (embed-certs-088493) Calling .Close
	I0903 23:39:42.971080  168184 main.go:141] libmachine: (embed-certs-088493) DBG | Closing plugin on server side
	I0903 23:39:42.971088  168184 main.go:141] libmachine: Successfully made call to close driver server
	I0903 23:39:42.971098  168184 main.go:141] libmachine: (embed-certs-088493) DBG | Closing plugin on server side
	I0903 23:39:42.971104  168184 main.go:141] libmachine: Making call to close connection to plugin binary
	I0903 23:39:42.971084  168184 main.go:141] libmachine: Successfully made call to close driver server
	I0903 23:39:42.971185  168184 main.go:141] libmachine: Making call to close connection to plugin binary
	I0903 23:39:42.971197  168184 addons.go:479] Verifying addon metrics-server=true in "embed-certs-088493"
	I0903 23:39:42.971403  168184 main.go:141] libmachine: Successfully made call to close driver server
	I0903 23:39:42.971416  168184 main.go:141] libmachine: Making call to close connection to plugin binary
	I0903 23:39:43.018871  168184 main.go:141] libmachine: Making call to close driver server
	I0903 23:39:43.018900  168184 main.go:141] libmachine: (embed-certs-088493) Calling .Close
	I0903 23:39:43.019306  168184 main.go:141] libmachine: (embed-certs-088493) DBG | Closing plugin on server side
	I0903 23:39:43.019354  168184 main.go:141] libmachine: Successfully made call to close driver server
	I0903 23:39:43.019366  168184 main.go:141] libmachine: Making call to close connection to plugin binary
	W0903 23:39:43.162588  168184 node_ready.go:57] node "embed-certs-088493" has "Ready":"False" status (will retry)
	I0903 23:39:43.258660  168184 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.339847622s)
	I0903 23:39:43.258727  168184 main.go:141] libmachine: Making call to close driver server
	I0903 23:39:43.258741  168184 main.go:141] libmachine: (embed-certs-088493) Calling .Close
	I0903 23:39:43.259077  168184 main.go:141] libmachine: Successfully made call to close driver server
	I0903 23:39:43.259137  168184 main.go:141] libmachine: Making call to close connection to plugin binary
	I0903 23:39:43.259145  168184 main.go:141] libmachine: (embed-certs-088493) DBG | Closing plugin on server side
	I0903 23:39:43.259162  168184 main.go:141] libmachine: Making call to close driver server
	I0903 23:39:43.259279  168184 main.go:141] libmachine: (embed-certs-088493) Calling .Close
	I0903 23:39:43.259595  168184 main.go:141] libmachine: (embed-certs-088493) DBG | Closing plugin on server side
	I0903 23:39:43.259615  168184 main.go:141] libmachine: Successfully made call to close driver server
	I0903 23:39:43.259623  168184 main.go:141] libmachine: Making call to close connection to plugin binary
	I0903 23:39:43.260848  168184 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-088493 addons enable metrics-server
	
	I0903 23:39:43.261929  168184 out.go:179] * Enabled addons: metrics-server, storage-provisioner, default-storageclass, dashboard
	I0903 23:39:43.262942  168184 addons.go:514] duration metric: took 2.518204365s for enable addons: enabled=[metrics-server storage-provisioner default-storageclass dashboard]
	I0903 23:39:43.036707  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | domain default-k8s-diff-port-799704 has defined MAC address 52:54:00:a0:5b:2e in network mk-default-k8s-diff-port-799704
	I0903 23:39:43.037307  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | unable to find current IP address of domain default-k8s-diff-port-799704 in network mk-default-k8s-diff-port-799704
	I0903 23:39:43.037341  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | I0903 23:39:43.037251  168561 retry.go:31] will retry after 2.378633942s: waiting for domain to come up
	I0903 23:39:45.418699  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | domain default-k8s-diff-port-799704 has defined MAC address 52:54:00:a0:5b:2e in network mk-default-k8s-diff-port-799704
	I0903 23:39:45.419270  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | unable to find current IP address of domain default-k8s-diff-port-799704 in network mk-default-k8s-diff-port-799704
	I0903 23:39:45.419294  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | I0903 23:39:45.419170  168561 retry.go:31] will retry after 4.350956655s: waiting for domain to come up
	W0903 23:39:45.644356  168184 node_ready.go:57] node "embed-certs-088493" has "Ready":"False" status (will retry)
	W0903 23:39:47.702029  168184 node_ready.go:57] node "embed-certs-088493" has "Ready":"False" status (will retry)
	I0903 23:39:49.646957  168184 node_ready.go:49] node "embed-certs-088493" is "Ready"
	I0903 23:39:49.646992  168184 node_ready.go:38] duration metric: took 8.506385518s for node "embed-certs-088493" to be "Ready" ...
	I0903 23:39:49.647010  168184 api_server.go:52] waiting for apiserver process to appear ...
	I0903 23:39:49.647071  168184 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:39:49.674344  168184 api_server.go:72] duration metric: took 8.92968556s to wait for apiserver process to appear ...
	I0903 23:39:49.674379  168184 api_server.go:88] waiting for apiserver healthz status ...
	I0903 23:39:49.674406  168184 api_server.go:253] Checking apiserver healthz at https://192.168.50.143:8443/healthz ...
	I0903 23:39:49.683534  168184 api_server.go:279] https://192.168.50.143:8443/healthz returned 200:
	ok
	I0903 23:39:49.684659  168184 api_server.go:141] control plane version: v1.34.0
	I0903 23:39:49.684684  168184 api_server.go:131] duration metric: took 10.295954ms to wait for apiserver health ...
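
The healthz wait above is a simple HTTPS poll: api_server.go hits https://<node-ip>:8443/healthz until it returns 200 with body "ok". A hedged sketch of that probe follows; the retry count is an assumption, and certificate verification is skipped only because a throwaway test cluster uses a cluster-local CA (minikube's real check may verify against it).

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Skipping TLS verification is for illustration only; do not do this
	// against a real cluster.
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	for i := 0; i < 30; i++ {
		resp, err := client.Get("https://192.168.50.143:8443/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == 200 {
				fmt.Printf("healthz: %s\n", body) // the log shows "ok"
				return
			}
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("apiserver never became healthy")
}
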
	I0903 23:39:49.684697  168184 system_pods.go:43] waiting for kube-system pods to appear ...
	I0903 23:39:49.689273  168184 system_pods.go:59] 8 kube-system pods found
	I0903 23:39:49.689307  168184 system_pods.go:61] "coredns-66bc5c9577-hg9bb" [f8c43287-ec9a-48ad-b799-e5bb4b30b817] Running
	I0903 23:39:49.689322  168184 system_pods.go:61] "etcd-embed-certs-088493" [0917f6cc-6edc-4812-81c3-15f318021f57] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0903 23:39:49.689331  168184 system_pods.go:61] "kube-apiserver-embed-certs-088493" [5324d5b1-225a-4bab-8624-807c65f7737f] Running
	I0903 23:39:49.689343  168184 system_pods.go:61] "kube-controller-manager-embed-certs-088493" [c15fb12e-7f6b-4bfe-977c-97d35447e245] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0903 23:39:49.689353  168184 system_pods.go:61] "kube-proxy-pgtpd" [083b9318-0780-4c96-8991-7534443b6159] Running
	I0903 23:39:49.689371  168184 system_pods.go:61] "kube-scheduler-embed-certs-088493" [41c8bd25-dbb6-4d53-8642-d6f837c5c859] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0903 23:39:49.689380  168184 system_pods.go:61] "metrics-server-746fcd58dc-85qvg" [000bf568-f6a0-4621-899d-788283765155] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0903 23:39:49.689416  168184 system_pods.go:61] "storage-provisioner" [7c1d1800-66c1-42f5-87c5-675fd6610230] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0903 23:39:49.689425  168184 system_pods.go:74] duration metric: took 4.720826ms to wait for pod list to return data ...
	I0903 23:39:49.689442  168184 default_sa.go:34] waiting for default service account to be created ...
	I0903 23:39:49.693818  168184 default_sa.go:45] found service account: "default"
	I0903 23:39:49.693835  168184 default_sa.go:55] duration metric: took 4.384486ms for default service account to be created ...
	I0903 23:39:49.693843  168184 system_pods.go:116] waiting for k8s-apps to be running ...
	I0903 23:39:49.697438  168184 system_pods.go:86] 8 kube-system pods found
	I0903 23:39:49.697471  168184 system_pods.go:89] "coredns-66bc5c9577-hg9bb" [f8c43287-ec9a-48ad-b799-e5bb4b30b817] Running
	I0903 23:39:49.697486  168184 system_pods.go:89] "etcd-embed-certs-088493" [0917f6cc-6edc-4812-81c3-15f318021f57] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0903 23:39:49.697493  168184 system_pods.go:89] "kube-apiserver-embed-certs-088493" [5324d5b1-225a-4bab-8624-807c65f7737f] Running
	I0903 23:39:49.697509  168184 system_pods.go:89] "kube-controller-manager-embed-certs-088493" [c15fb12e-7f6b-4bfe-977c-97d35447e245] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0903 23:39:49.697519  168184 system_pods.go:89] "kube-proxy-pgtpd" [083b9318-0780-4c96-8991-7534443b6159] Running
	I0903 23:39:49.697529  168184 system_pods.go:89] "kube-scheduler-embed-certs-088493" [41c8bd25-dbb6-4d53-8642-d6f837c5c859] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0903 23:39:49.697543  168184 system_pods.go:89] "metrics-server-746fcd58dc-85qvg" [000bf568-f6a0-4621-899d-788283765155] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0903 23:39:49.697557  168184 system_pods.go:89] "storage-provisioner" [7c1d1800-66c1-42f5-87c5-675fd6610230] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0903 23:39:49.697572  168184 system_pods.go:126] duration metric: took 3.722231ms to wait for k8s-apps to be running ...
	I0903 23:39:49.697586  168184 system_svc.go:44] waiting for kubelet service to be running ....
	I0903 23:39:49.697650  168184 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0903 23:39:49.722443  168184 system_svc.go:56] duration metric: took 24.84315ms WaitForService to wait for kubelet
	I0903 23:39:49.722482  168184 kubeadm.go:578] duration metric: took 8.977829577s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0903 23:39:49.722519  168184 node_conditions.go:102] verifying NodePressure condition ...
	I0903 23:39:49.728053  168184 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0903 23:39:49.728077  168184 node_conditions.go:123] node cpu capacity is 2
	I0903 23:39:49.728088  168184 node_conditions.go:105] duration metric: took 5.564387ms to run NodePressure ...
	I0903 23:39:49.728101  168184 start.go:241] waiting for startup goroutines ...
	I0903 23:39:49.728110  168184 start.go:246] waiting for cluster config update ...
	I0903 23:39:49.728123  168184 start.go:255] writing updated cluster config ...
	I0903 23:39:49.728441  168184 ssh_runner.go:195] Run: rm -f paused
	I0903 23:39:49.735381  168184 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0903 23:39:49.742029  168184 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-hg9bb" in "kube-system" namespace to be "Ready" or be gone ...
	I0903 23:39:49.750961  168184 pod_ready.go:94] pod "coredns-66bc5c9577-hg9bb" is "Ready"
	I0903 23:39:49.750990  168184 pod_ready.go:86] duration metric: took 8.940148ms for pod "coredns-66bc5c9577-hg9bb" in "kube-system" namespace to be "Ready" or be gone ...
	I0903 23:39:49.753806  168184 pod_ready.go:83] waiting for pod "etcd-embed-certs-088493" in "kube-system" namespace to be "Ready" or be gone ...
	I0903 23:39:49.772119  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | domain default-k8s-diff-port-799704 has defined MAC address 52:54:00:a0:5b:2e in network mk-default-k8s-diff-port-799704
	I0903 23:39:49.772626  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) found domain IP: 192.168.39.63
	I0903 23:39:49.772661  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | domain default-k8s-diff-port-799704 has current primary IP address 192.168.39.63 and MAC address 52:54:00:a0:5b:2e in network mk-default-k8s-diff-port-799704
	I0903 23:39:49.772672  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) reserving static IP address...
	I0903 23:39:49.773083  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-799704", mac: "52:54:00:a0:5b:2e", ip: "192.168.39.63"} in network mk-default-k8s-diff-port-799704: {Iface:virbr1 ExpiryTime:2025-09-04 00:39:43 +0000 UTC Type:0 Mac:52:54:00:a0:5b:2e Iaid: IPaddr:192.168.39.63 Prefix:24 Hostname:default-k8s-diff-port-799704 Clientid:01:52:54:00:a0:5b:2e}
	I0903 23:39:49.773114  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | skip adding static IP to network mk-default-k8s-diff-port-799704 - found existing host DHCP lease matching {name: "default-k8s-diff-port-799704", mac: "52:54:00:a0:5b:2e", ip: "192.168.39.63"}
	I0903 23:39:49.773130  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) reserved static IP address 192.168.39.63 for domain default-k8s-diff-port-799704
	I0903 23:39:49.773143  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) waiting for SSH...
	I0903 23:39:49.773158  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | Getting to WaitForSSH function...
	I0903 23:39:49.775358  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | domain default-k8s-diff-port-799704 has defined MAC address 52:54:00:a0:5b:2e in network mk-default-k8s-diff-port-799704
	I0903 23:39:49.775784  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:5b:2e", ip: ""} in network mk-default-k8s-diff-port-799704: {Iface:virbr1 ExpiryTime:2025-09-04 00:39:43 +0000 UTC Type:0 Mac:52:54:00:a0:5b:2e Iaid: IPaddr:192.168.39.63 Prefix:24 Hostname:default-k8s-diff-port-799704 Clientid:01:52:54:00:a0:5b:2e}
	I0903 23:39:49.775821  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | domain default-k8s-diff-port-799704 has defined IP address 192.168.39.63 and MAC address 52:54:00:a0:5b:2e in network mk-default-k8s-diff-port-799704
	I0903 23:39:49.775914  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | Using SSH client type: external
	I0903 23:39:49.775969  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | Using SSH private key: /home/jenkins/minikube-integration/21341-109162/.minikube/machines/default-k8s-diff-port-799704/id_rsa (-rw-------)
	I0903 23:39:49.776034  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.63 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/21341-109162/.minikube/machines/default-k8s-diff-port-799704/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0903 23:39:49.776052  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | About to run SSH command:
	I0903 23:39:49.776061  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | exit 0
	I0903 23:39:49.901906  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | SSH cmd err, output: <nil>: 
	I0903 23:39:49.902261  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) Calling .GetConfigRaw
	I0903 23:39:49.902844  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) Calling .GetIP
	I0903 23:39:49.905187  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | domain default-k8s-diff-port-799704 has defined MAC address 52:54:00:a0:5b:2e in network mk-default-k8s-diff-port-799704
	I0903 23:39:49.905557  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:5b:2e", ip: ""} in network mk-default-k8s-diff-port-799704: {Iface:virbr1 ExpiryTime:2025-09-04 00:39:43 +0000 UTC Type:0 Mac:52:54:00:a0:5b:2e Iaid: IPaddr:192.168.39.63 Prefix:24 Hostname:default-k8s-diff-port-799704 Clientid:01:52:54:00:a0:5b:2e}
	I0903 23:39:49.905588  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | domain default-k8s-diff-port-799704 has defined IP address 192.168.39.63 and MAC address 52:54:00:a0:5b:2e in network mk-default-k8s-diff-port-799704
	I0903 23:39:49.905853  168525 profile.go:143] Saving config to /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/default-k8s-diff-port-799704/config.json ...
	I0903 23:39:49.906117  168525 machine.go:93] provisionDockerMachine start ...
	I0903 23:39:49.906164  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) Calling .DriverName
	I0903 23:39:49.906436  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) Calling .GetSSHHostname
	I0903 23:39:49.909118  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | domain default-k8s-diff-port-799704 has defined MAC address 52:54:00:a0:5b:2e in network mk-default-k8s-diff-port-799704
	I0903 23:39:49.909485  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:5b:2e", ip: ""} in network mk-default-k8s-diff-port-799704: {Iface:virbr1 ExpiryTime:2025-09-04 00:39:43 +0000 UTC Type:0 Mac:52:54:00:a0:5b:2e Iaid: IPaddr:192.168.39.63 Prefix:24 Hostname:default-k8s-diff-port-799704 Clientid:01:52:54:00:a0:5b:2e}
	I0903 23:39:49.909517  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | domain default-k8s-diff-port-799704 has defined IP address 192.168.39.63 and MAC address 52:54:00:a0:5b:2e in network mk-default-k8s-diff-port-799704
	I0903 23:39:49.909628  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) Calling .GetSSHPort
	I0903 23:39:49.909805  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) Calling .GetSSHKeyPath
	I0903 23:39:49.909987  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) Calling .GetSSHKeyPath
	I0903 23:39:49.910151  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) Calling .GetSSHUsername
	I0903 23:39:49.910306  168525 main.go:141] libmachine: Using SSH client type: native
	I0903 23:39:49.910527  168525 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83a420] 0x83d120 <nil>  [] 0s} 192.168.39.63 22 <nil> <nil>}
	I0903 23:39:49.910537  168525 main.go:141] libmachine: About to run SSH command:
	hostname
	I0903 23:39:50.014640  168525 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0903 23:39:50.014669  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) Calling .GetMachineName
	I0903 23:39:50.014904  168525 buildroot.go:166] provisioning hostname "default-k8s-diff-port-799704"
	I0903 23:39:50.014929  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) Calling .GetMachineName
	I0903 23:39:50.015114  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) Calling .GetSSHHostname
	I0903 23:39:50.018055  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | domain default-k8s-diff-port-799704 has defined MAC address 52:54:00:a0:5b:2e in network mk-default-k8s-diff-port-799704
	I0903 23:39:50.018422  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:5b:2e", ip: ""} in network mk-default-k8s-diff-port-799704: {Iface:virbr1 ExpiryTime:2025-09-04 00:39:43 +0000 UTC Type:0 Mac:52:54:00:a0:5b:2e Iaid: IPaddr:192.168.39.63 Prefix:24 Hostname:default-k8s-diff-port-799704 Clientid:01:52:54:00:a0:5b:2e}
	I0903 23:39:50.018472  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | domain default-k8s-diff-port-799704 has defined IP address 192.168.39.63 and MAC address 52:54:00:a0:5b:2e in network mk-default-k8s-diff-port-799704
	I0903 23:39:50.018636  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) Calling .GetSSHPort
	I0903 23:39:50.018849  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) Calling .GetSSHKeyPath
	I0903 23:39:50.019076  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) Calling .GetSSHKeyPath
	I0903 23:39:50.019257  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) Calling .GetSSHUsername
	I0903 23:39:50.019426  168525 main.go:141] libmachine: Using SSH client type: native
	I0903 23:39:50.019678  168525 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83a420] 0x83d120 <nil>  [] 0s} 192.168.39.63 22 <nil> <nil>}
	I0903 23:39:50.019694  168525 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-799704 && echo "default-k8s-diff-port-799704" | sudo tee /etc/hostname
	I0903 23:39:50.141537  168525 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-799704
	
	I0903 23:39:50.141574  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) Calling .GetSSHHostname
	I0903 23:39:50.144682  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | domain default-k8s-diff-port-799704 has defined MAC address 52:54:00:a0:5b:2e in network mk-default-k8s-diff-port-799704
	I0903 23:39:50.145019  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:5b:2e", ip: ""} in network mk-default-k8s-diff-port-799704: {Iface:virbr1 ExpiryTime:2025-09-04 00:39:43 +0000 UTC Type:0 Mac:52:54:00:a0:5b:2e Iaid: IPaddr:192.168.39.63 Prefix:24 Hostname:default-k8s-diff-port-799704 Clientid:01:52:54:00:a0:5b:2e}
	I0903 23:39:50.145049  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | domain default-k8s-diff-port-799704 has defined IP address 192.168.39.63 and MAC address 52:54:00:a0:5b:2e in network mk-default-k8s-diff-port-799704
	I0903 23:39:50.145195  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) Calling .GetSSHPort
	I0903 23:39:50.145418  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) Calling .GetSSHKeyPath
	I0903 23:39:50.145562  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) Calling .GetSSHKeyPath
	I0903 23:39:50.145700  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) Calling .GetSSHUsername
	I0903 23:39:50.145911  168525 main.go:141] libmachine: Using SSH client type: native
	I0903 23:39:50.146180  168525 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83a420] 0x83d120 <nil>  [] 0s} 192.168.39.63 22 <nil> <nil>}
	I0903 23:39:50.146199  168525 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-799704' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-799704/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-799704' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0903 23:39:50.255397  168525 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0903 23:39:50.255427  168525 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21341-109162/.minikube CaCertPath:/home/jenkins/minikube-integration/21341-109162/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21341-109162/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21341-109162/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21341-109162/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21341-109162/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21341-109162/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21341-109162/.minikube}
	I0903 23:39:50.255451  168525 buildroot.go:174] setting up certificates
	I0903 23:39:50.255466  168525 provision.go:84] configureAuth start
	I0903 23:39:50.255483  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) Calling .GetMachineName
	I0903 23:39:50.255836  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) Calling .GetIP
	I0903 23:39:50.259446  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | domain default-k8s-diff-port-799704 has defined MAC address 52:54:00:a0:5b:2e in network mk-default-k8s-diff-port-799704
	I0903 23:39:50.259884  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:5b:2e", ip: ""} in network mk-default-k8s-diff-port-799704: {Iface:virbr1 ExpiryTime:2025-09-04 00:39:43 +0000 UTC Type:0 Mac:52:54:00:a0:5b:2e Iaid: IPaddr:192.168.39.63 Prefix:24 Hostname:default-k8s-diff-port-799704 Clientid:01:52:54:00:a0:5b:2e}
	I0903 23:39:50.259914  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | domain default-k8s-diff-port-799704 has defined IP address 192.168.39.63 and MAC address 52:54:00:a0:5b:2e in network mk-default-k8s-diff-port-799704
	I0903 23:39:50.260088  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) Calling .GetSSHHostname
	I0903 23:39:50.262682  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | domain default-k8s-diff-port-799704 has defined MAC address 52:54:00:a0:5b:2e in network mk-default-k8s-diff-port-799704
	I0903 23:39:50.263060  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:5b:2e", ip: ""} in network mk-default-k8s-diff-port-799704: {Iface:virbr1 ExpiryTime:2025-09-04 00:39:43 +0000 UTC Type:0 Mac:52:54:00:a0:5b:2e Iaid: IPaddr:192.168.39.63 Prefix:24 Hostname:default-k8s-diff-port-799704 Clientid:01:52:54:00:a0:5b:2e}
	I0903 23:39:50.263100  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | domain default-k8s-diff-port-799704 has defined IP address 192.168.39.63 and MAC address 52:54:00:a0:5b:2e in network mk-default-k8s-diff-port-799704
	I0903 23:39:50.263203  168525 provision.go:143] copyHostCerts
	I0903 23:39:50.263281  168525 exec_runner.go:144] found /home/jenkins/minikube-integration/21341-109162/.minikube/ca.pem, removing ...
	I0903 23:39:50.263299  168525 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21341-109162/.minikube/ca.pem
	I0903 23:39:50.263354  168525 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21341-109162/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21341-109162/.minikube/ca.pem (1078 bytes)
	I0903 23:39:50.263438  168525 exec_runner.go:144] found /home/jenkins/minikube-integration/21341-109162/.minikube/cert.pem, removing ...
	I0903 23:39:50.263446  168525 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21341-109162/.minikube/cert.pem
	I0903 23:39:50.263465  168525 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21341-109162/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21341-109162/.minikube/cert.pem (1123 bytes)
	I0903 23:39:50.263519  168525 exec_runner.go:144] found /home/jenkins/minikube-integration/21341-109162/.minikube/key.pem, removing ...
	I0903 23:39:50.263526  168525 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21341-109162/.minikube/key.pem
	I0903 23:39:50.263542  168525 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21341-109162/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21341-109162/.minikube/key.pem (1675 bytes)
	I0903 23:39:50.263587  168525 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21341-109162/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21341-109162/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21341-109162/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-799704 san=[127.0.0.1 192.168.39.63 default-k8s-diff-port-799704 localhost minikube]
	I0903 23:39:50.602313  168525 provision.go:177] copyRemoteCerts
	I0903 23:39:50.602368  168525 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0903 23:39:50.602392  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) Calling .GetSSHHostname
	I0903 23:39:50.604930  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | domain default-k8s-diff-port-799704 has defined MAC address 52:54:00:a0:5b:2e in network mk-default-k8s-diff-port-799704
	I0903 23:39:50.605268  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:5b:2e", ip: ""} in network mk-default-k8s-diff-port-799704: {Iface:virbr1 ExpiryTime:2025-09-04 00:39:43 +0000 UTC Type:0 Mac:52:54:00:a0:5b:2e Iaid: IPaddr:192.168.39.63 Prefix:24 Hostname:default-k8s-diff-port-799704 Clientid:01:52:54:00:a0:5b:2e}
	I0903 23:39:50.605301  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | domain default-k8s-diff-port-799704 has defined IP address 192.168.39.63 and MAC address 52:54:00:a0:5b:2e in network mk-default-k8s-diff-port-799704
	I0903 23:39:50.605502  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) Calling .GetSSHPort
	I0903 23:39:50.605701  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) Calling .GetSSHKeyPath
	I0903 23:39:50.605883  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) Calling .GetSSHUsername
	I0903 23:39:50.606030  168525 sshutil.go:53] new ssh client: &{IP:192.168.39.63 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21341-109162/.minikube/machines/default-k8s-diff-port-799704/id_rsa Username:docker}
	I0903 23:39:50.692788  168525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0903 23:39:50.719278  168525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0903 23:39:50.746292  168525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0903 23:39:50.774559  168525 provision.go:87] duration metric: took 519.07244ms to configureAuth
	I0903 23:39:50.774589  168525 buildroot.go:189] setting minikube options for container-runtime
	I0903 23:39:50.774798  168525 config.go:182] Loaded profile config "default-k8s-diff-port-799704": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0903 23:39:50.774882  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) Calling .GetSSHHostname
	I0903 23:39:50.777459  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | domain default-k8s-diff-port-799704 has defined MAC address 52:54:00:a0:5b:2e in network mk-default-k8s-diff-port-799704
	I0903 23:39:50.777817  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:5b:2e", ip: ""} in network mk-default-k8s-diff-port-799704: {Iface:virbr1 ExpiryTime:2025-09-04 00:39:43 +0000 UTC Type:0 Mac:52:54:00:a0:5b:2e Iaid: IPaddr:192.168.39.63 Prefix:24 Hostname:default-k8s-diff-port-799704 Clientid:01:52:54:00:a0:5b:2e}
	I0903 23:39:50.777847  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | domain default-k8s-diff-port-799704 has defined IP address 192.168.39.63 and MAC address 52:54:00:a0:5b:2e in network mk-default-k8s-diff-port-799704
	I0903 23:39:50.778019  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) Calling .GetSSHPort
	I0903 23:39:50.778203  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) Calling .GetSSHKeyPath
	I0903 23:39:50.778379  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) Calling .GetSSHKeyPath
	I0903 23:39:50.778490  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) Calling .GetSSHUsername
	I0903 23:39:50.778617  168525 main.go:141] libmachine: Using SSH client type: native
	I0903 23:39:50.778835  168525 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83a420] 0x83d120 <nil>  [] 0s} 192.168.39.63 22 <nil> <nil>}
	I0903 23:39:50.778855  168525 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0903 23:39:51.011695  168525 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0903 23:39:51.011726  168525 machine.go:96] duration metric: took 1.105578172s to provisionDockerMachine
	I0903 23:39:51.011744  168525 start.go:293] postStartSetup for "default-k8s-diff-port-799704" (driver="kvm2")
	I0903 23:39:51.011757  168525 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0903 23:39:51.011779  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) Calling .DriverName
	I0903 23:39:51.012153  168525 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0903 23:39:51.012191  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) Calling .GetSSHHostname
	I0903 23:39:51.015053  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | domain default-k8s-diff-port-799704 has defined MAC address 52:54:00:a0:5b:2e in network mk-default-k8s-diff-port-799704
	I0903 23:39:51.015411  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:5b:2e", ip: ""} in network mk-default-k8s-diff-port-799704: {Iface:virbr1 ExpiryTime:2025-09-04 00:39:43 +0000 UTC Type:0 Mac:52:54:00:a0:5b:2e Iaid: IPaddr:192.168.39.63 Prefix:24 Hostname:default-k8s-diff-port-799704 Clientid:01:52:54:00:a0:5b:2e}
	I0903 23:39:51.015438  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | domain default-k8s-diff-port-799704 has defined IP address 192.168.39.63 and MAC address 52:54:00:a0:5b:2e in network mk-default-k8s-diff-port-799704
	I0903 23:39:51.015633  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) Calling .GetSSHPort
	I0903 23:39:51.015847  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) Calling .GetSSHKeyPath
	I0903 23:39:51.016003  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) Calling .GetSSHUsername
	I0903 23:39:51.016183  168525 sshutil.go:53] new ssh client: &{IP:192.168.39.63 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21341-109162/.minikube/machines/default-k8s-diff-port-799704/id_rsa Username:docker}
	I0903 23:39:51.106391  168525 ssh_runner.go:195] Run: cat /etc/os-release
	I0903 23:39:51.111268  168525 info.go:137] Remote host: Buildroot 2025.02
	I0903 23:39:51.111302  168525 filesync.go:126] Scanning /home/jenkins/minikube-integration/21341-109162/.minikube/addons for local assets ...
	I0903 23:39:51.111378  168525 filesync.go:126] Scanning /home/jenkins/minikube-integration/21341-109162/.minikube/files for local assets ...
	I0903 23:39:51.111475  168525 filesync.go:149] local asset: /home/jenkins/minikube-integration/21341-109162/.minikube/files/etc/ssl/certs/1132882.pem -> 1132882.pem in /etc/ssl/certs
	I0903 23:39:51.111606  168525 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0903 23:39:51.124981  168525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/files/etc/ssl/certs/1132882.pem --> /etc/ssl/certs/1132882.pem (1708 bytes)
	I0903 23:39:51.157053  168525 start.go:296] duration metric: took 145.28983ms for postStartSetup
	I0903 23:39:51.157106  168525 fix.go:56] duration metric: took 19.734351982s for fixHost
	I0903 23:39:51.157130  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) Calling .GetSSHHostname
	I0903 23:39:51.159836  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | domain default-k8s-diff-port-799704 has defined MAC address 52:54:00:a0:5b:2e in network mk-default-k8s-diff-port-799704
	I0903 23:39:51.160235  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:5b:2e", ip: ""} in network mk-default-k8s-diff-port-799704: {Iface:virbr1 ExpiryTime:2025-09-04 00:39:43 +0000 UTC Type:0 Mac:52:54:00:a0:5b:2e Iaid: IPaddr:192.168.39.63 Prefix:24 Hostname:default-k8s-diff-port-799704 Clientid:01:52:54:00:a0:5b:2e}
	I0903 23:39:51.160300  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | domain default-k8s-diff-port-799704 has defined IP address 192.168.39.63 and MAC address 52:54:00:a0:5b:2e in network mk-default-k8s-diff-port-799704
	I0903 23:39:51.160437  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) Calling .GetSSHPort
	I0903 23:39:51.160644  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) Calling .GetSSHKeyPath
	I0903 23:39:51.160820  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) Calling .GetSSHKeyPath
	I0903 23:39:51.161007  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) Calling .GetSSHUsername
	I0903 23:39:51.161249  168525 main.go:141] libmachine: Using SSH client type: native
	I0903 23:39:51.161542  168525 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83a420] 0x83d120 <nil>  [] 0s} 192.168.39.63 22 <nil> <nil>}
	I0903 23:39:51.161568  168525 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0903 23:39:51.267613  168525 main.go:141] libmachine: SSH cmd err, output: <nil>: 1756942791.225994565
	
	I0903 23:39:51.267649  168525 fix.go:216] guest clock: 1756942791.225994565
	I0903 23:39:51.267659  168525 fix.go:229] Guest: 2025-09-03 23:39:51.225994565 +0000 UTC Remote: 2025-09-03 23:39:51.1571123 +0000 UTC m=+19.923532049 (delta=68.882265ms)
	I0903 23:39:51.267680  168525 fix.go:200] guest clock delta is within tolerance: 68.882265ms
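
The clock check above works by running `date +%s.%N` in the guest and diffing the result against a host-side timestamp; here the delta is 68.882265ms, which fix.go accepts. A sketch of that comparison, using the guest value from the log; the tolerance constant is an assumption for illustration, not minikube's actual threshold, and float parsing loses sub-microsecond precision (fine for a skew check).

package main

import (
	"fmt"
	"math"
	"strconv"
	"time"
)

func main() {
	// Output of `date +%s.%N` in the guest, taken from the log above.
	guestRaw := "1756942791.225994565"
	sec, err := strconv.ParseFloat(guestRaw, 64)
	if err != nil {
		panic(err)
	}
	guest := time.Unix(0, int64(sec*float64(time.Second)))

	// Host-side timestamp recorded around the SSH round trip.
	host := time.Now()

	delta := host.Sub(guest)
	const tolerance = time.Second // illustrative tolerance
	fmt.Printf("delta=%v within=%v\n", delta,
		math.Abs(float64(delta)) <= float64(tolerance))
}
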
	I0903 23:39:51.267685  168525 start.go:83] releasing machines lock for "default-k8s-diff-port-799704", held for 19.844953372s
	I0903 23:39:51.267705  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) Calling .DriverName
	I0903 23:39:51.267968  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) Calling .GetIP
	I0903 23:39:51.271046  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | domain default-k8s-diff-port-799704 has defined MAC address 52:54:00:a0:5b:2e in network mk-default-k8s-diff-port-799704
	I0903 23:39:51.271416  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:5b:2e", ip: ""} in network mk-default-k8s-diff-port-799704: {Iface:virbr1 ExpiryTime:2025-09-04 00:39:43 +0000 UTC Type:0 Mac:52:54:00:a0:5b:2e Iaid: IPaddr:192.168.39.63 Prefix:24 Hostname:default-k8s-diff-port-799704 Clientid:01:52:54:00:a0:5b:2e}
	I0903 23:39:51.271440  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | domain default-k8s-diff-port-799704 has defined IP address 192.168.39.63 and MAC address 52:54:00:a0:5b:2e in network mk-default-k8s-diff-port-799704
	I0903 23:39:51.271654  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) Calling .DriverName
	I0903 23:39:51.272313  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) Calling .DriverName
	I0903 23:39:51.272572  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) Calling .DriverName
	I0903 23:39:51.272657  168525 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0903 23:39:51.272709  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) Calling .GetSSHHostname
	I0903 23:39:51.272800  168525 ssh_runner.go:195] Run: cat /version.json
	I0903 23:39:51.272831  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) Calling .GetSSHHostname
	I0903 23:39:51.275925  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | domain default-k8s-diff-port-799704 has defined MAC address 52:54:00:a0:5b:2e in network mk-default-k8s-diff-port-799704
	I0903 23:39:51.276358  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:5b:2e", ip: ""} in network mk-default-k8s-diff-port-799704: {Iface:virbr1 ExpiryTime:2025-09-04 00:39:43 +0000 UTC Type:0 Mac:52:54:00:a0:5b:2e Iaid: IPaddr:192.168.39.63 Prefix:24 Hostname:default-k8s-diff-port-799704 Clientid:01:52:54:00:a0:5b:2e}
	I0903 23:39:51.276389  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | domain default-k8s-diff-port-799704 has defined IP address 192.168.39.63 and MAC address 52:54:00:a0:5b:2e in network mk-default-k8s-diff-port-799704
	I0903 23:39:51.276409  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | domain default-k8s-diff-port-799704 has defined MAC address 52:54:00:a0:5b:2e in network mk-default-k8s-diff-port-799704
	I0903 23:39:51.276565  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) Calling .GetSSHPort
	I0903 23:39:51.276733  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) Calling .GetSSHKeyPath
	I0903 23:39:51.276885  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:5b:2e", ip: ""} in network mk-default-k8s-diff-port-799704: {Iface:virbr1 ExpiryTime:2025-09-04 00:39:43 +0000 UTC Type:0 Mac:52:54:00:a0:5b:2e Iaid: IPaddr:192.168.39.63 Prefix:24 Hostname:default-k8s-diff-port-799704 Clientid:01:52:54:00:a0:5b:2e}
	I0903 23:39:51.276908  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | domain default-k8s-diff-port-799704 has defined IP address 192.168.39.63 and MAC address 52:54:00:a0:5b:2e in network mk-default-k8s-diff-port-799704
	I0903 23:39:51.276918  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) Calling .GetSSHUsername
	I0903 23:39:51.277054  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) Calling .GetSSHPort
	I0903 23:39:51.277112  168525 sshutil.go:53] new ssh client: &{IP:192.168.39.63 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21341-109162/.minikube/machines/default-k8s-diff-port-799704/id_rsa Username:docker}
	I0903 23:39:51.277187  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) Calling .GetSSHKeyPath
	I0903 23:39:51.277335  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) Calling .GetSSHUsername
	I0903 23:39:51.277486  168525 sshutil.go:53] new ssh client: &{IP:192.168.39.63 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21341-109162/.minikube/machines/default-k8s-diff-port-799704/id_rsa Username:docker}
	I0903 23:39:51.379057  168525 ssh_runner.go:195] Run: systemctl --version
	I0903 23:39:51.384960  168525 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0903 23:39:51.529307  168525 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0903 23:39:51.537936  168525 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0903 23:39:51.538011  168525 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0903 23:39:51.558368  168525 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0903 23:39:51.558394  168525 start.go:495] detecting cgroup driver to use...
	I0903 23:39:51.558466  168525 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0903 23:39:51.578951  168525 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0903 23:39:51.596694  168525 docker.go:218] disabling cri-docker service (if available) ...
	I0903 23:39:51.596752  168525 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0903 23:39:51.613345  168525 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0903 23:39:51.627714  168525 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0903 23:39:51.771138  168525 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0903 23:39:51.904861  168525 docker.go:234] disabling docker service ...
	I0903 23:39:51.904942  168525 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0903 23:39:51.921699  168525 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0903 23:39:51.935975  168525 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0903 23:39:52.148548  168525 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0903 23:39:52.296698  168525 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0903 23:39:52.312273  168525 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0903 23:39:52.336148  168525 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0903 23:39:52.336224  168525 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0903 23:39:52.348966  168525 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0903 23:39:52.349044  168525 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0903 23:39:52.362982  168525 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0903 23:39:52.379362  168525 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0903 23:39:52.391934  168525 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0903 23:39:52.409486  168525 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0903 23:39:52.422712  168525 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0903 23:39:52.442694  168525 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
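
The run of sed commands above edits /etc/crio/crio.conf.d/02-crio.conf in place to pin the pause image and force the cgroupfs cgroup manager before restarting CRI-O. A hedged Go equivalent of two of those substitutions, operating on an in-memory config string (the sample contents are illustrative):

package main

import (
	"fmt"
	"regexp"
)

func main() {
	conf := `[crio.runtime]
cgroup_manager = "systemd"
pause_image = "registry.k8s.io/pause:3.9"
`
	// Equivalent of: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|'
	pause := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	conf = pause.ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10.1"`)

	// Equivalent of: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
	cgroup := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
	conf = cgroup.ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)

	fmt.Print(conf)
}
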
	I0903 23:39:52.454945  168525 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0903 23:39:52.465176  168525 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0903 23:39:52.465229  168525 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0903 23:39:52.484711  168525 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
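
The "cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables" failure above is expected when the br_netfilter module is not yet loaded; the sysctl tree only appears once the module is in the kernel, so minikube falls back to modprobe, as the next line shows. A sketch of that probe-then-load pattern with os/exec:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Probe: fails while br_netfilter (and its sysctl tree) is absent.
	if err := exec.Command("sudo", "sysctl",
		"net.bridge.bridge-nf-call-iptables").Run(); err != nil {
		fmt.Println("sysctl probe failed, loading br_netfilter:", err)
		// Fallback: loading the module creates /proc/sys/net/bridge/*.
		if err := exec.Command("sudo", "modprobe", "br_netfilter").Run(); err != nil {
			fmt.Println("modprobe failed:", err)
			return
		}
	}
	fmt.Println("bridge netfilter available")
}
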
	I0903 23:39:52.497721  168525 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0903 23:39:52.656667  168525 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0903 23:39:52.772929  168525 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0903 23:39:52.773004  168525 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0903 23:39:52.778525  168525 start.go:563] Will wait 60s for crictl version
	I0903 23:39:52.778587  168525 ssh_runner.go:195] Run: which crictl
	I0903 23:39:52.782973  168525 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0903 23:39:52.831724  168525 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0903 23:39:52.831911  168525 ssh_runner.go:195] Run: crio --version
	I0903 23:39:52.862674  168525 ssh_runner.go:195] Run: crio --version
	I0903 23:39:52.892236  168525 out.go:179] * Preparing Kubernetes v1.34.0 on CRI-O 1.29.1 ...
	I0903 23:39:53.350090  161984 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0903 23:39:53.350225  161984 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0903 23:39:53.352239  161984 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0903 23:39:53.352325  161984 kubeadm.go:310] [preflight] Running pre-flight checks
	I0903 23:39:53.352429  161984 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0903 23:39:53.352559  161984 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0903 23:39:53.352700  161984 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0903 23:39:53.352785  161984 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0903 23:39:53.353884  161984 out.go:252]   - Generating certificates and keys ...
	I0903 23:39:53.354002  161984 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0903 23:39:53.354096  161984 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0903 23:39:53.354204  161984 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0903 23:39:53.354294  161984 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0903 23:39:53.354408  161984 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0903 23:39:53.354488  161984 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0903 23:39:53.354571  161984 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0903 23:39:53.354691  161984 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0903 23:39:53.354803  161984 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0903 23:39:53.354908  161984 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0903 23:39:53.354963  161984 kubeadm.go:310] [certs] Using the existing "sa" key
	I0903 23:39:53.355043  161984 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0903 23:39:53.355116  161984 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0903 23:39:53.355189  161984 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0903 23:39:53.355279  161984 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0903 23:39:53.355378  161984 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0903 23:39:53.355503  161984 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0903 23:39:53.355595  161984 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0903 23:39:53.355639  161984 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0903 23:39:53.355708  161984 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0903 23:39:53.356804  161984 out.go:252]   - Booting up control plane ...
	I0903 23:39:53.356945  161984 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0903 23:39:53.357090  161984 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0903 23:39:53.357200  161984 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0903 23:39:53.357322  161984 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0903 23:39:53.357557  161984 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0903 23:39:53.357628  161984 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0903 23:39:53.357717  161984 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0903 23:39:53.357955  161984 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0903 23:39:53.358039  161984 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0903 23:39:53.358267  161984 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0903 23:39:53.358357  161984 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0903 23:39:53.358607  161984 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0903 23:39:53.358690  161984 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0903 23:39:53.358948  161984 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0903 23:39:53.359032  161984 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0903 23:39:53.359346  161984 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0903 23:39:53.359365  161984 kubeadm.go:310] 
	I0903 23:39:53.359417  161984 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0903 23:39:53.359470  161984 kubeadm.go:310] 		timed out waiting for the condition
	I0903 23:39:53.359476  161984 kubeadm.go:310] 
	I0903 23:39:53.359539  161984 kubeadm.go:310] 	This error is likely caused by:
	I0903 23:39:53.359578  161984 kubeadm.go:310] 		- The kubelet is not running
	I0903 23:39:53.359718  161984 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0903 23:39:53.359727  161984 kubeadm.go:310] 
	I0903 23:39:53.359871  161984 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0903 23:39:53.359916  161984 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0903 23:39:53.359961  161984 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0903 23:39:53.359968  161984 kubeadm.go:310] 
	I0903 23:39:53.360175  161984 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0903 23:39:53.360307  161984 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtime's CLI.
	I0903 23:39:53.360316  161984 kubeadm.go:310] 
	I0903 23:39:53.360461  161984 kubeadm.go:310] 	Here is one example of how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0903 23:39:53.360565  161984 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0903 23:39:53.360667  161984 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0903 23:39:53.360764  161984 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
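The two crictl commands kubeadm suggests can be chained into one step; a sketch, using kube-apiserver as the example name filter and the endpoint quoted above:

	# grab the newest matching container ID, then dump its logs
	CID=$(sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a --name kube-apiserver -q | head -n1)
	[ -n "$CID" ] && sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs "$CID"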
	I0903 23:39:53.360841  161984 kubeadm.go:394] duration metric: took 3m57.809707974s to StartCluster
	I0903 23:39:53.360890  161984 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:39:53.360954  161984 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:39:53.361022  161984 kubeadm.go:310] 
	I0903 23:39:53.423382  161984 cri.go:89] found id: ""
	I0903 23:39:53.423411  161984 logs.go:282] 0 containers: []
	W0903 23:39:53.423422  161984 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:39:53.423430  161984 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:39:53.423488  161984 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:39:53.479608  161984 cri.go:89] found id: ""
	I0903 23:39:53.479645  161984 logs.go:282] 0 containers: []
	W0903 23:39:53.479659  161984 logs.go:284] No container was found matching "etcd"
	I0903 23:39:53.479667  161984 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:39:53.479736  161984 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:39:53.528071  161984 cri.go:89] found id: ""
	I0903 23:39:53.528107  161984 logs.go:282] 0 containers: []
	W0903 23:39:53.528121  161984 logs.go:284] No container was found matching "coredns"
	I0903 23:39:53.528131  161984 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:39:53.528202  161984 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:39:53.573292  161984 cri.go:89] found id: ""
	I0903 23:39:53.573335  161984 logs.go:282] 0 containers: []
	W0903 23:39:53.573348  161984 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:39:53.573361  161984 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:39:53.573461  161984 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:39:53.620296  161984 cri.go:89] found id: ""
	I0903 23:39:53.620326  161984 logs.go:282] 0 containers: []
	W0903 23:39:53.620334  161984 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:39:53.620340  161984 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:39:53.620395  161984 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:39:53.671465  161984 cri.go:89] found id: ""
	I0903 23:39:53.671500  161984 logs.go:282] 0 containers: []
	W0903 23:39:53.671512  161984 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:39:53.671521  161984 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:39:53.671600  161984 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:39:53.726259  161984 cri.go:89] found id: ""
	I0903 23:39:53.726297  161984 logs.go:282] 0 containers: []
	W0903 23:39:53.726320  161984 logs.go:284] No container was found matching "kindnet"
	I0903 23:39:53.726335  161984 logs.go:123] Gathering logs for kubelet ...
	I0903 23:39:53.726350  161984 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0903 23:39:53.803144  161984 logs.go:123] Gathering logs for dmesg ...
	I0903 23:39:53.803236  161984 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0903 23:39:53.825585  161984 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:39:53.825628  161984 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:39:53.938313  161984 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0903 23:39:53.938350  161984 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:39:53.938368  161984 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0903 23:39:54.079732  161984 logs.go:123] Gathering logs for container status ...
	I0903 23:39:54.079785  161984 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0903 23:39:54.144894  161984 out.go:434] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtime's CLI.
	
		Here is one example of how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0903 23:39:54.144973  161984 out.go:285] * 
	W0903 23:39:54.145064  161984 out.go:285] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	W0903 23:39:54.145083  161984 out.go:285] * 
	W0903 23:39:54.147493  161984 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0903 23:39:54.150778  161984 out.go:203] 
	W0903 23:39:54.151952  161984 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	W0903 23:39:54.152049  161984 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0903 23:39:54.152109  161984 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
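The suggested flag maps to the kubelet's --cgroup-driver setting, and the kubelet and CRI-O must agree on one driver (CRI-O's side is set by the same kind of cgroup_manager sed shown near the top of this log). A sketch of the retry plus a consistency check, with the profile name taken from this run and the flag value something to verify against the runtime rather than a known fix:

	# retry with the kubelet cgroup driver pinned, per the suggestion above
	minikube start -p old-k8s-version-335468 --extra-config=kubelet.cgroup-driver=systemd
	# confirm which driver CRI-O itself is configured with
	minikube ssh -p old-k8s-version-335468 -- sudo grep cgroup_manager /etc/crio/crio.conf.d/02-crio.conf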
	I0903 23:39:54.153719  161984 out.go:203] 
	
	
	==> CRI-O <==
	Sep 03 23:39:55 old-k8s-version-335468 crio[824]: time="2025-09-03 23:39:55.058604459Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1756942795058584466,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a12178cb-90ea-4e38-b117-f2e6e72e93c2 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 03 23:39:55 old-k8s-version-335468 crio[824]: time="2025-09-03 23:39:55.059240325Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a9a129b1-c47a-4d0d-97b7-820330c83193 name=/runtime.v1.RuntimeService/ListContainers
	Sep 03 23:39:55 old-k8s-version-335468 crio[824]: time="2025-09-03 23:39:55.059443002Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a9a129b1-c47a-4d0d-97b7-820330c83193 name=/runtime.v1.RuntimeService/ListContainers
	Sep 03 23:39:55 old-k8s-version-335468 crio[824]: time="2025-09-03 23:39:55.059867661Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=a9a129b1-c47a-4d0d-97b7-820330c83193 name=/runtime.v1.RuntimeService/ListContainers
	Sep 03 23:39:55 old-k8s-version-335468 crio[824]: time="2025-09-03 23:39:55.113708348Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=08144207-f75e-4e97-836b-84d4acdddb15 name=/runtime.v1.RuntimeService/Version
	Sep 03 23:39:55 old-k8s-version-335468 crio[824]: time="2025-09-03 23:39:55.113872212Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=08144207-f75e-4e97-836b-84d4acdddb15 name=/runtime.v1.RuntimeService/Version
	Sep 03 23:39:55 old-k8s-version-335468 crio[824]: time="2025-09-03 23:39:55.118598528Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=644b4532-68bd-4df6-97b1-f873546aca43 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 03 23:39:55 old-k8s-version-335468 crio[824]: time="2025-09-03 23:39:55.119312903Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1756942795119278279,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=644b4532-68bd-4df6-97b1-f873546aca43 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 03 23:39:55 old-k8s-version-335468 crio[824]: time="2025-09-03 23:39:55.120185716Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0e48b4ec-2bc0-467d-9342-f2f20091db88 name=/runtime.v1.RuntimeService/ListContainers
	Sep 03 23:39:55 old-k8s-version-335468 crio[824]: time="2025-09-03 23:39:55.120283096Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0e48b4ec-2bc0-467d-9342-f2f20091db88 name=/runtime.v1.RuntimeService/ListContainers
	Sep 03 23:39:55 old-k8s-version-335468 crio[824]: time="2025-09-03 23:39:55.120351568Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=0e48b4ec-2bc0-467d-9342-f2f20091db88 name=/runtime.v1.RuntimeService/ListContainers
	Sep 03 23:39:55 old-k8s-version-335468 crio[824]: time="2025-09-03 23:39:55.181493695Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c5a1305c-e5c9-4c90-b3b9-625587d98592 name=/runtime.v1.RuntimeService/Version
	Sep 03 23:39:55 old-k8s-version-335468 crio[824]: time="2025-09-03 23:39:55.181731164Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c5a1305c-e5c9-4c90-b3b9-625587d98592 name=/runtime.v1.RuntimeService/Version
	Sep 03 23:39:55 old-k8s-version-335468 crio[824]: time="2025-09-03 23:39:55.184162665Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=fb35ca20-fcb7-4b68-8b31-ae83f3caae33 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 03 23:39:55 old-k8s-version-335468 crio[824]: time="2025-09-03 23:39:55.185105102Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1756942795185077276,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=fb35ca20-fcb7-4b68-8b31-ae83f3caae33 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 03 23:39:55 old-k8s-version-335468 crio[824]: time="2025-09-03 23:39:55.185808895Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f845aa1b-db00-47b4-8a31-13bc27544eb2 name=/runtime.v1.RuntimeService/ListContainers
	Sep 03 23:39:55 old-k8s-version-335468 crio[824]: time="2025-09-03 23:39:55.185907295Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f845aa1b-db00-47b4-8a31-13bc27544eb2 name=/runtime.v1.RuntimeService/ListContainers
	Sep 03 23:39:55 old-k8s-version-335468 crio[824]: time="2025-09-03 23:39:55.185961613Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=f845aa1b-db00-47b4-8a31-13bc27544eb2 name=/runtime.v1.RuntimeService/ListContainers
	Sep 03 23:39:55 old-k8s-version-335468 crio[824]: time="2025-09-03 23:39:55.231357843Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6257ddf8-47c2-4ef1-a49b-b07a078bf104 name=/runtime.v1.RuntimeService/Version
	Sep 03 23:39:55 old-k8s-version-335468 crio[824]: time="2025-09-03 23:39:55.231502377Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6257ddf8-47c2-4ef1-a49b-b07a078bf104 name=/runtime.v1.RuntimeService/Version
	Sep 03 23:39:55 old-k8s-version-335468 crio[824]: time="2025-09-03 23:39:55.233504345Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0f1680eb-9db1-460d-a53d-84ef1fb65639 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 03 23:39:55 old-k8s-version-335468 crio[824]: time="2025-09-03 23:39:55.233964167Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1756942795233941352,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0f1680eb-9db1-460d-a53d-84ef1fb65639 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 03 23:39:55 old-k8s-version-335468 crio[824]: time="2025-09-03 23:39:55.236034688Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=61c442a8-c3ef-4aea-aff2-481fec4869dd name=/runtime.v1.RuntimeService/ListContainers
	Sep 03 23:39:55 old-k8s-version-335468 crio[824]: time="2025-09-03 23:39:55.236115459Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=61c442a8-c3ef-4aea-aff2-481fec4869dd name=/runtime.v1.RuntimeService/ListContainers
	Sep 03 23:39:55 old-k8s-version-335468 crio[824]: time="2025-09-03 23:39:55.236153329Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=61c442a8-c3ef-4aea-aff2-481fec4869dd name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
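A refused connection on localhost:8443 just means nothing is listening on the apiserver port, which is consistent with the empty container list above; one quick confirmation from inside the VM (a sketch):

	sudo ss -ltnp | grep 8443 || echo "no listener on 8443"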
	
	
	==> dmesg <==
	[Sep 3 23:35] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000010] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.000046] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.017584] (rpcbind)[118]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +1.215007] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000022] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.089265] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.110682] kauditd_printk_skb: 46 callbacks suppressed
	[  +0.144101] kauditd_printk_skb: 18 callbacks suppressed
	[Sep 3 23:36] kauditd_printk_skb: 18 callbacks suppressed
	
	
	==> kernel <==
	 23:39:55 up 4 min,  0 users,  load average: 0.02, 0.11, 0.06
	Linux old-k8s-version-335468 6.6.95 #1 SMP PREEMPT_DYNAMIC Sat Jul 26 03:21:57 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kubelet <==
	Sep 03 23:39:54 old-k8s-version-335468 kubelet[1943]: goroutine 151 [select]:
	Sep 03 23:39:54 old-k8s-version-335468 kubelet[1943]: net.(*netFD).connect.func2(0x4f7fe40, 0xc000c77ec0, 0xc000cbe580, 0xc000ddee40, 0xc000ddede0)
	Sep 03 23:39:54 old-k8s-version-335468 kubelet[1943]:         /usr/local/go/src/net/fd_unix.go:118 +0xc5
	Sep 03 23:39:54 old-k8s-version-335468 kubelet[1943]: created by net.(*netFD).connect
	Sep 03 23:39:54 old-k8s-version-335468 kubelet[1943]:         /usr/local/go/src/net/fd_unix.go:117 +0x234
	Sep 03 23:39:54 old-k8s-version-335468 kubelet[1943]: goroutine 152 [syscall]:
	Sep 03 23:39:54 old-k8s-version-335468 kubelet[1943]: syscall.Syscall6(0xe8, 0xe, 0xc000a8fb6c, 0x7, 0xffffffffffffffff, 0x0, 0x0, 0x0, 0x0, 0x0)
	Sep 03 23:39:54 old-k8s-version-335468 kubelet[1943]:         /usr/local/go/src/syscall/asm_linux_amd64.s:41 +0x5
	Sep 03 23:39:54 old-k8s-version-335468 kubelet[1943]: k8s.io/kubernetes/vendor/golang.org/x/sys/unix.EpollWait(0xe, 0xc000a8fb6c, 0x7, 0x7, 0xffffffffffffffff, 0x0, 0x0, 0x0)
	Sep 03 23:39:54 old-k8s-version-335468 kubelet[1943]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/golang.org/x/sys/unix/zsyscall_linux_amd64.go:76 +0x72
	Sep 03 23:39:54 old-k8s-version-335468 kubelet[1943]: k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify.(*fdPoller).wait(0xc000ce8360, 0x0, 0x0, 0x0)
	Sep 03 23:39:54 old-k8s-version-335468 kubelet[1943]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify/inotify_poller.go:86 +0x91
	Sep 03 23:39:54 old-k8s-version-335468 kubelet[1943]: k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify.(*Watcher).readEvents(0xc000c1b130)
	Sep 03 23:39:54 old-k8s-version-335468 kubelet[1943]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify/inotify.go:192 +0x206
	Sep 03 23:39:54 old-k8s-version-335468 kubelet[1943]: created by k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify.NewWatcher
	Sep 03 23:39:54 old-k8s-version-335468 kubelet[1943]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify/inotify.go:59 +0x1a8
	Sep 03 23:39:54 old-k8s-version-335468 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Sep 03 23:39:54 old-k8s-version-335468 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Sep 03 23:39:55 old-k8s-version-335468 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 20.
	Sep 03 23:39:55 old-k8s-version-335468 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Sep 03 23:39:55 old-k8s-version-335468 kubelet[2054]: I0903 23:39:55.262199    2054 server.go:416] Version: v1.20.0
	Sep 03 23:39:55 old-k8s-version-335468 kubelet[2054]: I0903 23:39:55.262843    2054 server.go:837] Client rotation is on, will bootstrap in background
	Sep 03 23:39:55 old-k8s-version-335468 kubelet[2054]: I0903 23:39:55.266238    2054 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Sep 03 23:39:55 old-k8s-version-335468 kubelet[2054]: W0903 23:39:55.268135    2054 manager.go:159] Cannot detect current cgroup on cgroup v2
	Sep 03 23:39:55 old-k8s-version-335468 kubelet[2054]: I0903 23:39:55.268587    2054 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
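The kubelet tail above points at one plausible root cause: a v1.20.0 kubelet warning "Cannot detect current cgroup on cgroup v2" and then exiting with status 255 in a restart loop (counter at 20). To confirm the guest really is on the cgroup v2 hierarchy, a sketch to run inside the VM:

	stat -fc %T /sys/fs/cgroup/   # prints "cgroup2fs" on cgroup v2, "tmpfs" on v1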
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-335468 -n old-k8s-version-335468
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-335468 -n old-k8s-version-335468: exit status 6 (521.031017ms)

                                                
                                                
-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0903 23:39:56.034772  168917 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-335468" does not appear in /home/jenkins/minikube-integration/21341-109162/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "old-k8s-version-335468" apiserver is not running, skipping kubectl commands (state="Stopped")
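The status output above warns that kubectl points at a stale context and that the profile is missing from the kubeconfig; the command the warning names accepts a profile flag, e.g. (a sketch):

	minikube update-context -p old-k8s-version-335468
	kubectl config current-context   # verify kubectl now targets the refreshed entry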
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (278.11s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (4.73s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-335468 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) Non-zero exit: kubectl --context old-k8s-version-335468 create -f testdata/busybox.yaml: exit status 1 (76.017825ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-335468" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:194: kubectl --context old-k8s-version-335468 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-335468 -n old-k8s-version-335468
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-335468 -n old-k8s-version-335468: exit status 6 (339.662083ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0903 23:39:56.483963  168989 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-335468" does not appear in /home/jenkins/minikube-integration/21341-109162/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:247: status error: exit status 6 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-335468 logs -n 25
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                  ARGS                                                                                  │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p embed-certs-088493 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.0                   │ embed-certs-088493           │ jenkins │ v1.36.0 │ 03 Sep 25 23:36 UTC │ 03 Sep 25 23:37 UTC │
	│ ssh     │ -p enable-default-cni-380966 sudo systemctl status containerd --all --full --no-pager                                                                                  │ enable-default-cni-380966    │ jenkins │ v1.36.0 │ 03 Sep 25 23:36 UTC │                     │
	│ ssh     │ -p enable-default-cni-380966 sudo systemctl cat containerd --no-pager                                                                                                  │ enable-default-cni-380966    │ jenkins │ v1.36.0 │ 03 Sep 25 23:36 UTC │ 03 Sep 25 23:36 UTC │
	│ ssh     │ -p enable-default-cni-380966 sudo cat /lib/systemd/system/containerd.service                                                                                           │ enable-default-cni-380966    │ jenkins │ v1.36.0 │ 03 Sep 25 23:36 UTC │ 03 Sep 25 23:36 UTC │
	│ ssh     │ -p enable-default-cni-380966 sudo cat /etc/containerd/config.toml                                                                                                      │ enable-default-cni-380966    │ jenkins │ v1.36.0 │ 03 Sep 25 23:36 UTC │ 03 Sep 25 23:36 UTC │
	│ ssh     │ -p enable-default-cni-380966 sudo containerd config dump                                                                                                               │ enable-default-cni-380966    │ jenkins │ v1.36.0 │ 03 Sep 25 23:36 UTC │ 03 Sep 25 23:36 UTC │
	│ ssh     │ -p enable-default-cni-380966 sudo systemctl status crio --all --full --no-pager                                                                                        │ enable-default-cni-380966    │ jenkins │ v1.36.0 │ 03 Sep 25 23:36 UTC │ 03 Sep 25 23:36 UTC │
	│ ssh     │ -p enable-default-cni-380966 sudo systemctl cat crio --no-pager                                                                                                        │ enable-default-cni-380966    │ jenkins │ v1.36.0 │ 03 Sep 25 23:36 UTC │ 03 Sep 25 23:36 UTC │
	│ ssh     │ -p enable-default-cni-380966 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                              │ enable-default-cni-380966    │ jenkins │ v1.36.0 │ 03 Sep 25 23:36 UTC │ 03 Sep 25 23:36 UTC │
	│ ssh     │ -p enable-default-cni-380966 sudo crio config                                                                                                                          │ enable-default-cni-380966    │ jenkins │ v1.36.0 │ 03 Sep 25 23:36 UTC │ 03 Sep 25 23:36 UTC │
	│ delete  │ -p enable-default-cni-380966                                                                                                                                           │ enable-default-cni-380966    │ jenkins │ v1.36.0 │ 03 Sep 25 23:36 UTC │ 03 Sep 25 23:36 UTC │
	│ delete  │ -p disable-driver-mounts-005091                                                                                                                                        │ disable-driver-mounts-005091 │ jenkins │ v1.36.0 │ 03 Sep 25 23:36 UTC │ 03 Sep 25 23:36 UTC │
	│ start   │ -p default-k8s-diff-port-799704 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.0 │ default-k8s-diff-port-799704 │ jenkins │ v1.36.0 │ 03 Sep 25 23:36 UTC │ 03 Sep 25 23:37 UTC │
	│ addons  │ enable metrics-server -p no-preload-434043 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                │ no-preload-434043            │ jenkins │ v1.36.0 │ 03 Sep 25 23:37 UTC │ 03 Sep 25 23:37 UTC │
	│ stop    │ -p no-preload-434043 --alsologtostderr -v=3                                                                                                                            │ no-preload-434043            │ jenkins │ v1.36.0 │ 03 Sep 25 23:37 UTC │ 03 Sep 25 23:38 UTC │
	│ addons  │ enable metrics-server -p embed-certs-088493 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                               │ embed-certs-088493           │ jenkins │ v1.36.0 │ 03 Sep 25 23:37 UTC │ 03 Sep 25 23:37 UTC │
	│ stop    │ -p embed-certs-088493 --alsologtostderr -v=3                                                                                                                           │ embed-certs-088493           │ jenkins │ v1.36.0 │ 03 Sep 25 23:37 UTC │ 03 Sep 25 23:39 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-799704 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                     │ default-k8s-diff-port-799704 │ jenkins │ v1.36.0 │ 03 Sep 25 23:37 UTC │ 03 Sep 25 23:37 UTC │
	│ stop    │ -p default-k8s-diff-port-799704 --alsologtostderr -v=3                                                                                                                 │ default-k8s-diff-port-799704 │ jenkins │ v1.36.0 │ 03 Sep 25 23:37 UTC │ 03 Sep 25 23:39 UTC │
	│ addons  │ enable dashboard -p no-preload-434043 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                           │ no-preload-434043            │ jenkins │ v1.36.0 │ 03 Sep 25 23:38 UTC │ 03 Sep 25 23:38 UTC │
	│ start   │ -p no-preload-434043 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.0                  │ no-preload-434043            │ jenkins │ v1.36.0 │ 03 Sep 25 23:38 UTC │ 03 Sep 25 23:39 UTC │
	│ addons  │ enable dashboard -p embed-certs-088493 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                          │ embed-certs-088493           │ jenkins │ v1.36.0 │ 03 Sep 25 23:39 UTC │ 03 Sep 25 23:39 UTC │
	│ start   │ -p embed-certs-088493 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.0                   │ embed-certs-088493           │ jenkins │ v1.36.0 │ 03 Sep 25 23:39 UTC │ 03 Sep 25 23:39 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-799704 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                │ default-k8s-diff-port-799704 │ jenkins │ v1.36.0 │ 03 Sep 25 23:39 UTC │ 03 Sep 25 23:39 UTC │
	│ start   │ -p default-k8s-diff-port-799704 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.0 │ default-k8s-diff-port-799704 │ jenkins │ v1.36.0 │ 03 Sep 25 23:39 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/03 23:39:31
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0903 23:39:31.271818  168525 out.go:360] Setting OutFile to fd 1 ...
	I0903 23:39:31.272050  168525 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0903 23:39:31.272058  168525 out.go:374] Setting ErrFile to fd 2...
	I0903 23:39:31.272062  168525 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0903 23:39:31.272279  168525 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21341-109162/.minikube/bin
	I0903 23:39:31.272813  168525 out.go:368] Setting JSON to false
	I0903 23:39:31.273874  168525 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":8515,"bootTime":1756934256,"procs":219,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0903 23:39:31.273940  168525 start.go:140] virtualization: kvm guest
	I0903 23:39:31.275828  168525 out.go:179] * [default-k8s-diff-port-799704] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	I0903 23:39:31.277406  168525 notify.go:220] Checking for updates...
	I0903 23:39:31.278829  168525 out.go:179]   - MINIKUBE_LOCATION=21341
	I0903 23:39:31.280177  168525 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0903 23:39:31.281537  168525 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21341-109162/kubeconfig
	I0903 23:39:31.282646  168525 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21341-109162/.minikube
	I0903 23:39:31.283774  168525 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0903 23:39:31.284974  168525 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0903 23:39:31.286724  168525 config.go:182] Loaded profile config "default-k8s-diff-port-799704": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0903 23:39:31.287351  168525 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21341-109162/.minikube/bin/docker-machine-driver-kvm2
	I0903 23:39:31.287440  168525 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0903 23:39:31.308970  168525 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45085
	I0903 23:39:31.309860  168525 main.go:141] libmachine: () Calling .GetVersion
	I0903 23:39:31.310730  168525 main.go:141] libmachine: Using API Version  1
	I0903 23:39:31.310751  168525 main.go:141] libmachine: () Calling .SetConfigRaw
	I0903 23:39:31.311414  168525 main.go:141] libmachine: () Calling .GetMachineName
	I0903 23:39:31.311676  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) Calling .DriverName
	I0903 23:39:31.311969  168525 driver.go:421] Setting default libvirt URI to qemu:///system
	I0903 23:39:31.312450  168525 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21341-109162/.minikube/bin/docker-machine-driver-kvm2
	I0903 23:39:31.312503  168525 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0903 23:39:31.333553  168525 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40457
	I0903 23:39:31.334226  168525 main.go:141] libmachine: () Calling .GetVersion
	I0903 23:39:31.334781  168525 main.go:141] libmachine: Using API Version  1
	I0903 23:39:31.334799  168525 main.go:141] libmachine: () Calling .SetConfigRaw
	I0903 23:39:31.335144  168525 main.go:141] libmachine: () Calling .GetMachineName
	I0903 23:39:31.335265  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) Calling .DriverName
	I0903 23:39:31.388196  168525 out.go:179] * Using the kvm2 driver based on existing profile
	I0903 23:39:31.389355  168525 start.go:304] selected driver: kvm2
	I0903 23:39:31.389381  168525 start.go:918] validating driver "kvm2" against &{Name:default-k8s-diff-port-799704 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21147/minikube-v1.36.0-1753487480-21147-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:default-k8s-diff-port-799704 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.63 Port:8444 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0903 23:39:31.389764  168525 start.go:929] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0903 23:39:31.391092  168525 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0903 23:39:31.391304  168525 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21341-109162/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0903 23:39:31.418651  168525 install.go:137] /home/jenkins/minikube-integration/21341-109162/.minikube/bin/docker-machine-driver-kvm2 version is 1.36.0
	I0903 23:39:31.419224  168525 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0903 23:39:31.419280  168525 cni.go:84] Creating CNI manager for ""
	I0903 23:39:31.419338  168525 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0903 23:39:31.419383  168525 start.go:348] cluster config:
	{Name:default-k8s-diff-port-799704 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21147/minikube-v1.36.0-1753487480-21147-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:default-k8s-diff-port-799704 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.63 Port:8444 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0903 23:39:31.419512  168525 iso.go:125] acquiring lock: {Name:mk1032fbced3c9e76ba5a04480289e9f07d0eb9c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0903 23:39:31.421091  168525 out.go:179] * Starting "default-k8s-diff-port-799704" primary control-plane node in "default-k8s-diff-port-799704" cluster
	I0903 23:39:31.422103  168525 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0903 23:39:31.422147  168525 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21341-109162/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4
	I0903 23:39:31.422156  168525 cache.go:58] Caching tarball of preloaded images
	I0903 23:39:31.422278  168525 preload.go:172] Found /home/jenkins/minikube-integration/21341-109162/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0903 23:39:31.422293  168525 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
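The preload lines above are a plain cache check: build the versioned tarball path, and skip the download when the file already exists locally. A hedged sketch of that pattern (the paths and the download stub are illustrative, not minikube's actual API):

	package main

	import (
		"errors"
		"fmt"
		"os"
		"path/filepath"
	)

	// download stands in for the real fetcher; the sketch only shows the cache check.
	func download(path string) error { return errors.New("would fetch " + path) }

	// ensurePreload returns the cached tarball path, downloading only on a cache miss.
	func ensurePreload(cacheDir, k8sVersion string) (string, error) {
		name := fmt.Sprintf("preloaded-images-k8s-v18-%s-cri-o-overlay-amd64.tar.lz4", k8sVersion)
		path := filepath.Join(cacheDir, "preloaded-tarball", name)
		if _, err := os.Stat(path); err == nil {
			return path, nil // found in cache, skipping download
		}
		return path, download(path)
	}

	func main() {
		fmt.Println(ensurePreload(os.ExpandEnv("$HOME/.minikube/cache"), "v1.34.0"))
	}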
	I0903 23:39:31.422425  168525 profile.go:143] Saving config to /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/default-k8s-diff-port-799704/config.json ...
	I0903 23:39:31.422671  168525 start.go:360] acquireMachinesLock for default-k8s-diff-port-799704: {Name:mkcbe368d68a51a2a3c0eadc653c4df7d9736b4b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0903 23:39:31.422720  168525 start.go:364] duration metric: took 26.407µs to acquireMachinesLock for "default-k8s-diff-port-799704"
	I0903 23:39:31.422741  168525 start.go:96] Skipping create...Using existing machine configuration
	I0903 23:39:31.422748  168525 fix.go:54] fixHost starting: 
	I0903 23:39:31.423078  168525 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21341-109162/.minikube/bin/docker-machine-driver-kvm2
	I0903 23:39:31.423117  168525 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0903 23:39:31.441527  168525 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35531
	I0903 23:39:31.442203  168525 main.go:141] libmachine: () Calling .GetVersion
	I0903 23:39:31.442786  168525 main.go:141] libmachine: Using API Version  1
	I0903 23:39:31.442812  168525 main.go:141] libmachine: () Calling .SetConfigRaw
	I0903 23:39:31.443215  168525 main.go:141] libmachine: () Calling .GetMachineName
	I0903 23:39:31.443398  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) Calling .DriverName
	I0903 23:39:31.443541  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) Calling .GetState
	I0903 23:39:31.445456  168525 fix.go:112] recreateIfNeeded on default-k8s-diff-port-799704: state=Stopped err=<nil>
	I0903 23:39:31.445508  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) Calling .DriverName
	W0903 23:39:31.449565  168525 fix.go:138] unexpected machine state, will restart: <nil>
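fix.go has just found the machine in state=Stopped and opts to restart it rather than recreate it (the restart itself shows up later at the "Restarting existing kvm2 VM" line). A sketch of that decision against a made-up driver interface, not minikube's real one:

	package fixhost

	import "fmt"

	// Driver is a stand-in for the libmachine driver interface used above.
	type Driver interface {
		GetState() (string, error)
		Start() error
	}

	// recreateIfNeeded restarts a stopped machine instead of recreating it from scratch.
	func recreateIfNeeded(d Driver) error {
		state, err := d.GetState()
		if err != nil {
			return err
		}
		switch state {
		case "Running":
			return nil // nothing to fix
		case "Stopped":
			fmt.Println("unexpected machine state, will restart")
			return d.Start()
		default:
			return fmt.Errorf("unhandled machine state %q", state)
		}
	}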
	I0903 23:39:30.924315  167951 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0903 23:39:30.924344  167951 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0903 23:39:30.924364  167951 main.go:141] libmachine: (no-preload-434043) Calling .GetSSHHostname
	I0903 23:39:30.925334  167951 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0903 23:39:30.925362  167951 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0903 23:39:30.925405  167951 main.go:141] libmachine: (no-preload-434043) Calling .GetSSHHostname
	I0903 23:39:30.928751  167951 main.go:141] libmachine: (no-preload-434043) DBG | domain no-preload-434043 has defined MAC address 52:54:00:18:6a:cb in network mk-no-preload-434043
	I0903 23:39:30.929980  167951 main.go:141] libmachine: (no-preload-434043) DBG | domain no-preload-434043 has defined MAC address 52:54:00:18:6a:cb in network mk-no-preload-434043
	I0903 23:39:30.930221  167951 main.go:141] libmachine: (no-preload-434043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:6a:cb", ip: ""} in network mk-no-preload-434043: {Iface:virbr4 ExpiryTime:2025-09-04 00:38:55 +0000 UTC Type:0 Mac:52:54:00:18:6a:cb Iaid: IPaddr:192.168.72.145 Prefix:24 Hostname:no-preload-434043 Clientid:01:52:54:00:18:6a:cb}
	I0903 23:39:30.930285  167951 main.go:141] libmachine: (no-preload-434043) DBG | domain no-preload-434043 has defined IP address 192.168.72.145 and MAC address 52:54:00:18:6a:cb in network mk-no-preload-434043
	I0903 23:39:30.930682  167951 main.go:141] libmachine: (no-preload-434043) Calling .GetSSHPort
	I0903 23:39:30.930861  167951 main.go:141] libmachine: (no-preload-434043) Calling .GetSSHKeyPath
	I0903 23:39:30.931062  167951 main.go:141] libmachine: (no-preload-434043) Calling .GetSSHUsername
	I0903 23:39:30.931098  167951 main.go:141] libmachine: (no-preload-434043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:6a:cb", ip: ""} in network mk-no-preload-434043: {Iface:virbr4 ExpiryTime:2025-09-04 00:38:55 +0000 UTC Type:0 Mac:52:54:00:18:6a:cb Iaid: IPaddr:192.168.72.145 Prefix:24 Hostname:no-preload-434043 Clientid:01:52:54:00:18:6a:cb}
	I0903 23:39:30.931116  167951 main.go:141] libmachine: (no-preload-434043) DBG | domain no-preload-434043 has defined IP address 192.168.72.145 and MAC address 52:54:00:18:6a:cb in network mk-no-preload-434043
	I0903 23:39:30.931175  167951 sshutil.go:53] new ssh client: &{IP:192.168.72.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21341-109162/.minikube/machines/no-preload-434043/id_rsa Username:docker}
	I0903 23:39:30.932066  167951 main.go:141] libmachine: (no-preload-434043) Calling .GetSSHPort
	I0903 23:39:30.932251  167951 main.go:141] libmachine: (no-preload-434043) Calling .GetSSHKeyPath
	I0903 23:39:30.932469  167951 main.go:141] libmachine: (no-preload-434043) Calling .GetSSHUsername
	I0903 23:39:30.932671  167951 sshutil.go:53] new ssh client: &{IP:192.168.72.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21341-109162/.minikube/machines/no-preload-434043/id_rsa Username:docker}
	I0903 23:39:30.933250  167951 main.go:141] libmachine: (no-preload-434043) DBG | domain no-preload-434043 has defined MAC address 52:54:00:18:6a:cb in network mk-no-preload-434043
	I0903 23:39:30.933904  167951 main.go:141] libmachine: (no-preload-434043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:6a:cb", ip: ""} in network mk-no-preload-434043: {Iface:virbr4 ExpiryTime:2025-09-04 00:38:55 +0000 UTC Type:0 Mac:52:54:00:18:6a:cb Iaid: IPaddr:192.168.72.145 Prefix:24 Hostname:no-preload-434043 Clientid:01:52:54:00:18:6a:cb}
	I0903 23:39:30.933932  167951 main.go:141] libmachine: (no-preload-434043) DBG | domain no-preload-434043 has defined IP address 192.168.72.145 and MAC address 52:54:00:18:6a:cb in network mk-no-preload-434043
	I0903 23:39:30.937721  167951 main.go:141] libmachine: (no-preload-434043) Calling .GetSSHPort
	I0903 23:39:30.938011  167951 main.go:141] libmachine: (no-preload-434043) Calling .GetSSHKeyPath
	I0903 23:39:30.938313  167951 main.go:141] libmachine: (no-preload-434043) Calling .GetSSHUsername
	I0903 23:39:30.938593  167951 sshutil.go:53] new ssh client: &{IP:192.168.72.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21341-109162/.minikube/machines/no-preload-434043/id_rsa Username:docker}
	I0903 23:39:30.942958  167951 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33017
	I0903 23:39:30.943534  167951 main.go:141] libmachine: () Calling .GetVersion
	I0903 23:39:30.944030  167951 main.go:141] libmachine: Using API Version  1
	I0903 23:39:30.944053  167951 main.go:141] libmachine: () Calling .SetConfigRaw
	I0903 23:39:30.944469  167951 main.go:141] libmachine: () Calling .GetMachineName
	I0903 23:39:30.945591  167951 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21341-109162/.minikube/bin/docker-machine-driver-kvm2
	I0903 23:39:30.949659  167951 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0903 23:39:30.970235  167951 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43737
	I0903 23:39:30.970997  167951 main.go:141] libmachine: () Calling .GetVersion
	I0903 23:39:30.971694  167951 main.go:141] libmachine: Using API Version  1
	I0903 23:39:30.971723  167951 main.go:141] libmachine: () Calling .SetConfigRaw
	I0903 23:39:30.972120  167951 main.go:141] libmachine: () Calling .GetMachineName
	I0903 23:39:30.972343  167951 main.go:141] libmachine: (no-preload-434043) Calling .GetState
	I0903 23:39:30.974525  167951 main.go:141] libmachine: (no-preload-434043) Calling .DriverName
	I0903 23:39:30.974767  167951 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0903 23:39:30.974786  167951 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0903 23:39:30.974806  167951 main.go:141] libmachine: (no-preload-434043) Calling .GetSSHHostname
	I0903 23:39:30.978640  167951 main.go:141] libmachine: (no-preload-434043) DBG | domain no-preload-434043 has defined MAC address 52:54:00:18:6a:cb in network mk-no-preload-434043
	I0903 23:39:30.979150  167951 main.go:141] libmachine: (no-preload-434043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:6a:cb", ip: ""} in network mk-no-preload-434043: {Iface:virbr4 ExpiryTime:2025-09-04 00:38:55 +0000 UTC Type:0 Mac:52:54:00:18:6a:cb Iaid: IPaddr:192.168.72.145 Prefix:24 Hostname:no-preload-434043 Clientid:01:52:54:00:18:6a:cb}
	I0903 23:39:30.979183  167951 main.go:141] libmachine: (no-preload-434043) DBG | domain no-preload-434043 has defined IP address 192.168.72.145 and MAC address 52:54:00:18:6a:cb in network mk-no-preload-434043
	I0903 23:39:30.979349  167951 main.go:141] libmachine: (no-preload-434043) Calling .GetSSHPort
	I0903 23:39:30.979545  167951 main.go:141] libmachine: (no-preload-434043) Calling .GetSSHKeyPath
	I0903 23:39:30.979734  167951 main.go:141] libmachine: (no-preload-434043) Calling .GetSSHUsername
	I0903 23:39:30.979898  167951 sshutil.go:53] new ssh client: &{IP:192.168.72.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21341-109162/.minikube/machines/no-preload-434043/id_rsa Username:docker}
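Each sshutil line above builds a key-authenticated SSH client to the VM: the "docker" user, port 22, and the per-machine id_rsa key. Roughly, using golang.org/x/crypto/ssh (the host-key handling here is a shortcut for the example; this is a sketch, not minikube's sshutil):

	package main

	import (
		"fmt"
		"os"

		"golang.org/x/crypto/ssh"
	)

	// dial opens an SSH client the way the sshutil lines above describe.
	func dial(addr, keyPath string) (*ssh.Client, error) {
		key, err := os.ReadFile(keyPath)
		if err != nil {
			return nil, err
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			return nil, err
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // assumption: real code may pin the host key
		}
		return ssh.Dial("tcp", addr, cfg)
	}

	func main() {
		c, err := dial("192.168.72.145:22", os.ExpandEnv("$HOME/.minikube/machines/no-preload-434043/id_rsa"))
		if err != nil {
			fmt.Println(err)
			return
		}
		defer c.Close()
	}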
	I0903 23:39:31.130703  167951 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0903 23:39:31.167066  167951 node_ready.go:35] waiting up to 6m0s for node "no-preload-434043" to be "Ready" ...
	I0903 23:39:31.174901  167951 node_ready.go:49] node "no-preload-434043" is "Ready"
	I0903 23:39:31.174933  167951 node_ready.go:38] duration metric: took 7.827583ms for node "no-preload-434043" to be "Ready" ...
	I0903 23:39:31.174948  167951 api_server.go:52] waiting for apiserver process to appear ...
	I0903 23:39:31.174996  167951 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:39:31.209527  167951 api_server.go:72] duration metric: took 516.97608ms to wait for apiserver process to appear ...
	I0903 23:39:31.209554  167951 api_server.go:88] waiting for apiserver healthz status ...
	I0903 23:39:31.209577  167951 api_server.go:253] Checking apiserver healthz at https://192.168.72.145:8443/healthz ...
	I0903 23:39:31.218555  167951 api_server.go:279] https://192.168.72.145:8443/healthz returned 200:
	ok
	I0903 23:39:31.221061  167951 api_server.go:141] control plane version: v1.34.0
	I0903 23:39:31.221085  167951 api_server.go:131] duration metric: took 11.521702ms to wait for apiserver health ...
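The healthz wait above reduces to polling an HTTPS endpoint until it answers 200 with body "ok". A minimal sketch; the InsecureSkipVerify is a shortcut for the example, where the real client would trust the cluster CA:

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// waitHealthz polls url until it returns 200 "ok" or the deadline passes.
	func waitHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK && string(body) == "ok" {
					return nil
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("timed out waiting for %s", url)
	}

	func main() {
		fmt.Println(waitHealthz("https://192.168.72.145:8443/healthz", time.Minute))
	}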
	I0903 23:39:31.221095  167951 system_pods.go:43] waiting for kube-system pods to appear ...
	I0903 23:39:31.228196  167951 system_pods.go:59] 8 kube-system pods found
	I0903 23:39:31.228233  167951 system_pods.go:61] "coredns-66bc5c9577-z2s2p" [d39823a0-08dc-474c-bf6b-40d74bb06086] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0903 23:39:31.228243  167951 system_pods.go:61] "etcd-no-preload-434043" [cb3bdc9b-2cc5-48bf-af81-e466291b15ae] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0903 23:39:31.228253  167951 system_pods.go:61] "kube-apiserver-no-preload-434043" [bbc48910-bfce-4152-a0d9-213fab7b0e9d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0903 23:39:31.228262  167951 system_pods.go:61] "kube-controller-manager-no-preload-434043" [368d7eae-18f4-4a7c-9d38-5dba34a34a0d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0903 23:39:31.228268  167951 system_pods.go:61] "kube-proxy-lf7rz" [d3a15894-b9c5-47b0-9486-4b2f0a646a66] Running
	I0903 23:39:31.228279  167951 system_pods.go:61] "kube-scheduler-no-preload-434043" [01f11d9a-a42b-47df-93f8-7a6d34f05eaf] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0903 23:39:31.228287  167951 system_pods.go:61] "metrics-server-746fcd58dc-qn2mm" [e256b1d8-cce6-4144-aa59-a9a030f99eb9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0903 23:39:31.228301  167951 system_pods.go:61] "storage-provisioner" [52149bb2-d696-46fd-a4e6-15ccafdebf02] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0903 23:39:31.228313  167951 system_pods.go:74] duration metric: took 7.210776ms to wait for pod list to return data ...
	I0903 23:39:31.228326  167951 default_sa.go:34] waiting for default service account to be created ...
	I0903 23:39:31.234005  167951 default_sa.go:45] found service account: "default"
	I0903 23:39:31.234030  167951 default_sa.go:55] duration metric: took 5.694551ms for default service account to be created ...
	I0903 23:39:31.234042  167951 system_pods.go:116] waiting for k8s-apps to be running ...
	I0903 23:39:31.239296  167951 system_pods.go:86] 8 kube-system pods found
	I0903 23:39:31.239329  167951 system_pods.go:89] "coredns-66bc5c9577-z2s2p" [d39823a0-08dc-474c-bf6b-40d74bb06086] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0903 23:39:31.239340  167951 system_pods.go:89] "etcd-no-preload-434043" [cb3bdc9b-2cc5-48bf-af81-e466291b15ae] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0903 23:39:31.239351  167951 system_pods.go:89] "kube-apiserver-no-preload-434043" [bbc48910-bfce-4152-a0d9-213fab7b0e9d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0903 23:39:31.239362  167951 system_pods.go:89] "kube-controller-manager-no-preload-434043" [368d7eae-18f4-4a7c-9d38-5dba34a34a0d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0903 23:39:31.239371  167951 system_pods.go:89] "kube-proxy-lf7rz" [d3a15894-b9c5-47b0-9486-4b2f0a646a66] Running
	I0903 23:39:31.239384  167951 system_pods.go:89] "kube-scheduler-no-preload-434043" [01f11d9a-a42b-47df-93f8-7a6d34f05eaf] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0903 23:39:31.239394  167951 system_pods.go:89] "metrics-server-746fcd58dc-qn2mm" [e256b1d8-cce6-4144-aa59-a9a030f99eb9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0903 23:39:31.239405  167951 system_pods.go:89] "storage-provisioner" [52149bb2-d696-46fd-a4e6-15ccafdebf02] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0903 23:39:31.239413  167951 system_pods.go:126] duration metric: took 5.365177ms to wait for k8s-apps to be running ...
	I0903 23:39:31.239425  167951 system_svc.go:44] waiting for kubelet service to be running ....
	I0903 23:39:31.239473  167951 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0903 23:39:31.292169  167951 system_svc.go:56] duration metric: took 52.735418ms WaitForService to wait for kubelet
	I0903 23:39:31.292202  167951 kubeadm.go:578] duration metric: took 599.654473ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0903 23:39:31.292225  167951 node_conditions.go:102] verifying NodePressure condition ...
	I0903 23:39:31.298898  167951 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0903 23:39:31.298922  167951 node_conditions.go:123] node cpu capacity is 2
	I0903 23:39:31.298936  167951 node_conditions.go:105] duration metric: took 6.70535ms to run NodePressure ...
	I0903 23:39:31.298952  167951 start.go:241] waiting for startup goroutines ...
	I0903 23:39:31.319927  167951 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0903 23:39:31.319948  167951 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0903 23:39:31.325067  167951 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0903 23:39:31.325090  167951 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0903 23:39:31.329142  167951 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0903 23:39:31.347147  167951 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0903 23:39:31.409588  167951 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0903 23:39:31.409615  167951 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0903 23:39:31.411804  167951 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0903 23:39:31.411826  167951 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0903 23:39:31.497017  167951 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0903 23:39:31.497047  167951 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0903 23:39:31.505080  167951 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0903 23:39:31.505110  167951 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0903 23:39:31.564683  167951 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0903 23:39:31.568463  167951 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0903 23:39:31.568495  167951 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0903 23:39:31.636504  167951 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0903 23:39:31.636548  167951 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0903 23:39:31.712523  167951 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0903 23:39:31.712560  167951 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0903 23:39:31.768671  167951 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0903 23:39:31.768718  167951 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0903 23:39:31.852511  167951 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0903 23:39:31.852556  167951 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0903 23:39:31.933535  167951 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0903 23:39:31.933572  167951 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0903 23:39:32.030695  167951 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
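Addon manifests are applied in one shot: the node-local kubectl binary for the target Kubernetes version, the in-VM kubeconfig, and one -f flag per manifest. A small sketch that reproduces the shape of that command line (string assembly only, as an illustration):

	package main

	import (
		"fmt"
		"strings"
	)

	// applyCmd mirrors the addon-apply invocation seen in the log above.
	func applyCmd(version string, manifests []string) string {
		parts := []string{
			"sudo", "KUBECONFIG=/var/lib/minikube/kubeconfig",
			"/var/lib/minikube/binaries/" + version + "/kubectl", "apply",
		}
		for _, m := range manifests {
			parts = append(parts, "-f", m)
		}
		return strings.Join(parts, " ")
	}

	func main() {
		fmt.Println(applyCmd("v1.34.0", []string{
			"/etc/kubernetes/addons/dashboard-ns.yaml",
			"/etc/kubernetes/addons/dashboard-svc.yaml",
		}))
	}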
	I0903 23:39:35.006492  167951 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.659296879s)
	I0903 23:39:35.006576  167951 main.go:141] libmachine: Making call to close driver server
	I0903 23:39:35.006592  167951 main.go:141] libmachine: (no-preload-434043) Calling .Close
	I0903 23:39:35.006963  167951 main.go:141] libmachine: Successfully made call to close driver server
	I0903 23:39:35.006986  167951 main.go:141] libmachine: Making call to close connection to plugin binary
	I0903 23:39:35.006998  167951 main.go:141] libmachine: Making call to close driver server
	I0903 23:39:35.007008  167951 main.go:141] libmachine: (no-preload-434043) Calling .Close
	I0903 23:39:35.007538  167951 main.go:141] libmachine: (no-preload-434043) DBG | Closing plugin on server side
	I0903 23:39:35.007589  167951 main.go:141] libmachine: Successfully made call to close driver server
	I0903 23:39:35.007620  167951 main.go:141] libmachine: Making call to close connection to plugin binary
	I0903 23:39:35.010661  167951 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.681478522s)
	I0903 23:39:35.010699  167951 main.go:141] libmachine: Making call to close driver server
	I0903 23:39:35.010709  167951 main.go:141] libmachine: (no-preload-434043) Calling .Close
	I0903 23:39:35.011031  167951 main.go:141] libmachine: Successfully made call to close driver server
	I0903 23:39:35.011053  167951 main.go:141] libmachine: Making call to close connection to plugin binary
	I0903 23:39:35.011063  167951 main.go:141] libmachine: Making call to close driver server
	I0903 23:39:35.011072  167951 main.go:141] libmachine: (no-preload-434043) Calling .Close
	I0903 23:39:35.012729  167951 main.go:141] libmachine: (no-preload-434043) DBG | Closing plugin on server side
	I0903 23:39:35.012763  167951 main.go:141] libmachine: Successfully made call to close driver server
	I0903 23:39:35.012780  167951 main.go:141] libmachine: Making call to close connection to plugin binary
	I0903 23:39:35.093772  167951 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (3.529031226s)
	I0903 23:39:35.093830  167951 main.go:141] libmachine: Making call to close driver server
	I0903 23:39:35.093846  167951 main.go:141] libmachine: (no-preload-434043) Calling .Close
	I0903 23:39:35.094207  167951 main.go:141] libmachine: Successfully made call to close driver server
	I0903 23:39:35.094235  167951 main.go:141] libmachine: Making call to close connection to plugin binary
	I0903 23:39:35.094246  167951 main.go:141] libmachine: Making call to close driver server
	I0903 23:39:35.094254  167951 main.go:141] libmachine: (no-preload-434043) Calling .Close
	I0903 23:39:35.098319  167951 main.go:141] libmachine: (no-preload-434043) DBG | Closing plugin on server side
	I0903 23:39:35.098337  167951 main.go:141] libmachine: Successfully made call to close driver server
	I0903 23:39:35.098358  167951 main.go:141] libmachine: Making call to close connection to plugin binary
	I0903 23:39:35.098371  167951 addons.go:479] Verifying addon metrics-server=true in "no-preload-434043"
	I0903 23:39:35.098550  167951 main.go:141] libmachine: Making call to close driver server
	I0903 23:39:35.098568  167951 main.go:141] libmachine: (no-preload-434043) Calling .Close
	I0903 23:39:35.098881  167951 main.go:141] libmachine: Successfully made call to close driver server
	I0903 23:39:35.098898  167951 main.go:141] libmachine: Making call to close connection to plugin binary
	I0903 23:39:35.294568  167951 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (3.263818135s)
	I0903 23:39:35.294653  167951 main.go:141] libmachine: Making call to close driver server
	I0903 23:39:35.294676  167951 main.go:141] libmachine: (no-preload-434043) Calling .Close
	I0903 23:39:35.295105  167951 main.go:141] libmachine: Successfully made call to close driver server
	I0903 23:39:35.295130  167951 main.go:141] libmachine: Making call to close connection to plugin binary
	I0903 23:39:35.295140  167951 main.go:141] libmachine: Making call to close driver server
	I0903 23:39:35.295149  167951 main.go:141] libmachine: (no-preload-434043) Calling .Close
	I0903 23:39:35.297127  167951 main.go:141] libmachine: (no-preload-434043) DBG | Closing plugin on server side
	I0903 23:39:35.297151  167951 main.go:141] libmachine: Successfully made call to close driver server
	I0903 23:39:35.297172  167951 main.go:141] libmachine: Making call to close connection to plugin binary
	I0903 23:39:35.298897  167951 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-434043 addons enable metrics-server
	
	I0903 23:39:35.300309  167951 out.go:179] * Enabled addons: storage-provisioner, metrics-server, default-storageclass, dashboard
	I0903 23:39:30.569160  168184 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0903 23:39:30.585799  168184 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0903 23:39:30.590817  168184 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep  3 22:28 /usr/share/ca-certificates/minikubeCA.pem
	I0903 23:39:30.590881  168184 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0903 23:39:30.598100  168184 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0903 23:39:30.611138  168184 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/113288.pem && ln -fs /usr/share/ca-certificates/113288.pem /etc/ssl/certs/113288.pem"
	I0903 23:39:30.626975  168184 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/113288.pem
	I0903 23:39:30.631962  168184 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep  3 22:36 /usr/share/ca-certificates/113288.pem
	I0903 23:39:30.632013  168184 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/113288.pem
	I0903 23:39:30.639457  168184 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/113288.pem /etc/ssl/certs/51391683.0"
	I0903 23:39:30.652349  168184 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1132882.pem && ln -fs /usr/share/ca-certificates/1132882.pem /etc/ssl/certs/1132882.pem"
	I0903 23:39:30.669722  168184 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1132882.pem
	I0903 23:39:30.676323  168184 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep  3 22:36 /usr/share/ca-certificates/1132882.pem
	I0903 23:39:30.676391  168184 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1132882.pem
	I0903 23:39:30.684739  168184 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1132882.pem /etc/ssl/certs/3ec20f2e.0"
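The hash-and-symlink pairs above follow the standard OpenSSL CA directory layout: each trusted CA is linked as <subject-hash>.0 under /etc/ssl/certs, with the hash taken from `openssl x509 -hash`. A sketch that shells out the same way (error handling trimmed; a sketch, not minikube's certs.go):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// installCA symlinks a CA bundle as /etc/ssl/certs/<subject-hash>.0,
	// matching the "openssl x509 -hash" + "ln -fs" sequence in the log.
	func installCA(pemPath string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
		if err != nil {
			return err
		}
		link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
		os.Remove(link) // ln -fs semantics: replace any existing link
		return os.Symlink(pemPath, link)
	}

	func main() {
		fmt.Println(installCA("/usr/share/ca-certificates/minikubeCA.pem"))
	}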
	I0903 23:39:30.698776  168184 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0903 23:39:30.705787  168184 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0903 23:39:30.715596  168184 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0903 23:39:30.723820  168184 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0903 23:39:30.734268  168184 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0903 23:39:30.751209  168184 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0903 23:39:30.769986  168184 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
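The `-checkend 86400` flag asks openssl whether the certificate expires within the next 86400 seconds (24 hours); a non-zero exit means it does and the cert needs regenerating. The equivalent check in pure Go with crypto/x509, as an illustrative sketch:

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the first certificate in a PEM file
	// expires within d, mirroring `openssl x509 -checkend`.
	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM block in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Until(cert.NotAfter) < d, nil
	}

	func main() {
		fmt.Println(expiresWithin("/var/lib/minikube/certs/etcd/peer.crt", 24*time.Hour))
	}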
	I0903 23:39:30.779742  168184 kubeadm.go:392] StartCluster: {Name:embed-certs-088493 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21147/minikube-v1.36.0-1753487480-21147-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:embed-certs-088493 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.143 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0903 23:39:30.779870  168184 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0903 23:39:30.779944  168184 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0903 23:39:30.826700  168184 cri.go:89] found id: ""
	I0903 23:39:30.826791  168184 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0903 23:39:30.843146  168184 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0903 23:39:30.843174  168184 kubeadm.go:589] restartPrimaryControlPlane start ...
	I0903 23:39:30.843233  168184 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0903 23:39:30.856578  168184 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0903 23:39:30.857287  168184 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-088493" does not appear in /home/jenkins/minikube-integration/21341-109162/kubeconfig
	I0903 23:39:30.857752  168184 kubeconfig.go:62] /home/jenkins/minikube-integration/21341-109162/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-088493" cluster setting kubeconfig missing "embed-certs-088493" context setting]
	I0903 23:39:30.858340  168184 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21341-109162/kubeconfig: {Name:mkb8e4377749caffe9aea4452a6b5de9f0dc7427 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0903 23:39:30.859693  168184 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0903 23:39:30.872955  168184 kubeadm.go:626] The running cluster does not require reconfiguration: 192.168.50.143
	I0903 23:39:30.873001  168184 kubeadm.go:1152] stopping kube-system containers ...
	I0903 23:39:30.873018  168184 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0903 23:39:30.873080  168184 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0903 23:39:30.937819  168184 cri.go:89] found id: ""
	I0903 23:39:30.937898  168184 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0903 23:39:30.970391  168184 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0903 23:39:30.985618  168184 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0903 23:39:30.985641  168184 kubeadm.go:157] found existing configuration files:
	
	I0903 23:39:30.985702  168184 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0903 23:39:30.997473  168184 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0903 23:39:30.997551  168184 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0903 23:39:31.011825  168184 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0903 23:39:31.026448  168184 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0903 23:39:31.026510  168184 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0903 23:39:31.039622  168184 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0903 23:39:31.051294  168184 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0903 23:39:31.051360  168184 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0903 23:39:31.065244  168184 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0903 23:39:31.077889  168184 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0903 23:39:31.077952  168184 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
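The four grep/rm pairs above are one loop: each kubeconfig-style file is kept only if it already references the expected control-plane endpoint, otherwise it is removed so kubeadm regenerates it. A compact sketch of that cleanup (file reading in place of grep; an illustration, not the kubeadm.go source):

	package main

	import (
		"bytes"
		"fmt"
		"os"
	)

	// cleanStale removes any config file that does not reference the expected
	// endpoint, mirroring the grep-then-rm sequence in the log above.
	func cleanStale(endpoint string, files []string) {
		for _, f := range files {
			data, err := os.ReadFile(f)
			if err != nil || !bytes.Contains(data, []byte(endpoint)) {
				fmt.Printf("%q may not be in %s - will remove\n", endpoint, f)
				os.Remove(f)
			}
		}
	}

	func main() {
		cleanStale("https://control-plane.minikube.internal:8443", []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		})
	}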
	I0903 23:39:31.093981  168184 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0903 23:39:31.108296  168184 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0903 23:39:31.176874  168184 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0903 23:39:32.823767  168184 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.646847779s)
	I0903 23:39:32.823806  168184 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0903 23:39:33.102206  168184 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0903 23:39:33.185673  168184 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0903 23:39:33.256402  168184 api_server.go:52] waiting for apiserver process to appear ...
	I0903 23:39:33.256504  168184 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:39:33.757483  168184 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:39:34.256629  168184 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:39:34.756682  168184 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:39:35.257560  168184 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:39:31.451460  168525 out.go:252] * Restarting existing kvm2 VM for "default-k8s-diff-port-799704" ...
	I0903 23:39:31.451487  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) Calling .Start
	I0903 23:39:31.451677  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) starting domain...
	I0903 23:39:31.451780  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) ensuring networks are active...
	I0903 23:39:31.452685  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) Ensuring network default is active
	I0903 23:39:31.453151  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) Ensuring network mk-default-k8s-diff-port-799704 is active
	I0903 23:39:31.453750  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) getting domain XML...
	I0903 23:39:31.454639  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) creating domain...
	I0903 23:39:32.850704  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) waiting for IP...
	I0903 23:39:32.851600  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | domain default-k8s-diff-port-799704 has defined MAC address 52:54:00:a0:5b:2e in network mk-default-k8s-diff-port-799704
	I0903 23:39:32.852214  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | unable to find current IP address of domain default-k8s-diff-port-799704 in network mk-default-k8s-diff-port-799704
	I0903 23:39:32.852359  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | I0903 23:39:32.852203  168561 retry.go:31] will retry after 194.562879ms: waiting for domain to come up
	I0903 23:39:33.049200  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | domain default-k8s-diff-port-799704 has defined MAC address 52:54:00:a0:5b:2e in network mk-default-k8s-diff-port-799704
	I0903 23:39:33.049910  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | unable to find current IP address of domain default-k8s-diff-port-799704 in network mk-default-k8s-diff-port-799704
	I0903 23:39:33.049989  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | I0903 23:39:33.049872  168561 retry.go:31] will retry after 346.789216ms: waiting for domain to come up
	I0903 23:39:33.398907  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | domain default-k8s-diff-port-799704 has defined MAC address 52:54:00:a0:5b:2e in network mk-default-k8s-diff-port-799704
	I0903 23:39:33.399505  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | unable to find current IP address of domain default-k8s-diff-port-799704 in network mk-default-k8s-diff-port-799704
	I0903 23:39:33.399547  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | I0903 23:39:33.399469  168561 retry.go:31] will retry after 396.68152ms: waiting for domain to come up
	I0903 23:39:33.798263  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | domain default-k8s-diff-port-799704 has defined MAC address 52:54:00:a0:5b:2e in network mk-default-k8s-diff-port-799704
	I0903 23:39:33.799050  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | unable to find current IP address of domain default-k8s-diff-port-799704 in network mk-default-k8s-diff-port-799704
	I0903 23:39:33.799087  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | I0903 23:39:33.798998  168561 retry.go:31] will retry after 388.322823ms: waiting for domain to come up
	I0903 23:39:34.188660  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | domain default-k8s-diff-port-799704 has defined MAC address 52:54:00:a0:5b:2e in network mk-default-k8s-diff-port-799704
	I0903 23:39:34.189376  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | unable to find current IP address of domain default-k8s-diff-port-799704 in network mk-default-k8s-diff-port-799704
	I0903 23:39:34.189482  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | I0903 23:39:34.189334  168561 retry.go:31] will retry after 742.14172ms: waiting for domain to come up
	I0903 23:39:34.932960  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | domain default-k8s-diff-port-799704 has defined MAC address 52:54:00:a0:5b:2e in network mk-default-k8s-diff-port-799704
	I0903 23:39:34.933626  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | unable to find current IP address of domain default-k8s-diff-port-799704 in network mk-default-k8s-diff-port-799704
	I0903 23:39:34.933713  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | I0903 23:39:34.933579  168561 retry.go:31] will retry after 698.598056ms: waiting for domain to come up
	I0903 23:39:35.634753  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | domain default-k8s-diff-port-799704 has defined MAC address 52:54:00:a0:5b:2e in network mk-default-k8s-diff-port-799704
	I0903 23:39:35.635481  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | unable to find current IP address of domain default-k8s-diff-port-799704 in network mk-default-k8s-diff-port-799704
	I0903 23:39:35.635508  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | I0903 23:39:35.635369  168561 retry.go:31] will retry after 956.852118ms: waiting for domain to come up
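The "will retry after Nms" lines come from a retry helper that sleeps with a randomized, growing delay between attempts until the domain reports an IP. A minimal sketch of the pattern (the exact jitter policy is an assumption):

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// retryUntil calls fn with a jittered, growing delay until it succeeds
	// or the attempt budget runs out.
	func retryUntil(attempts int, base time.Duration, fn func() error) error {
		delay := base
		for i := 0; i < attempts; i++ {
			if err := fn(); err == nil {
				return nil
			}
			wait := delay/2 + time.Duration(rand.Int63n(int64(delay))) // jitter around the current delay
			fmt.Printf("will retry after %v: waiting for domain to come up\n", wait)
			time.Sleep(wait)
			delay *= 2
		}
		return errors.New("domain never reported an IP")
	}

	func main() {
		_ = retryUntil(5, 200*time.Millisecond, func() error { return errors.New("no IP yet") })
	}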
	I0903 23:39:35.301402  167951 addons.go:514] duration metric: took 4.608814093s for enable addons: enabled=[storage-provisioner metrics-server default-storageclass dashboard]
	I0903 23:39:35.301452  167951 start.go:246] waiting for cluster config update ...
	I0903 23:39:35.301470  167951 start.go:255] writing updated cluster config ...
	I0903 23:39:35.301784  167951 ssh_runner.go:195] Run: rm -f paused
	I0903 23:39:35.306947  167951 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0903 23:39:35.311995  167951 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-z2s2p" in "kube-system" namespace to be "Ready" or be gone ...
	I0903 23:39:35.322196  167951 pod_ready.go:94] pod "coredns-66bc5c9577-z2s2p" is "Ready"
	I0903 23:39:35.322232  167951 pod_ready.go:86] duration metric: took 10.20611ms for pod "coredns-66bc5c9577-z2s2p" in "kube-system" namespace to be "Ready" or be gone ...
	I0903 23:39:35.327157  167951 pod_ready.go:83] waiting for pod "etcd-no-preload-434043" in "kube-system" namespace to be "Ready" or be gone ...
	W0903 23:39:37.336026  167951 pod_ready.go:104] pod "etcd-no-preload-434043" is not "Ready", error: <nil>
	I0903 23:39:38.836063  167951 pod_ready.go:94] pod "etcd-no-preload-434043" is "Ready"
	I0903 23:39:38.836099  167951 pod_ready.go:86] duration metric: took 3.508912099s for pod "etcd-no-preload-434043" in "kube-system" namespace to be "Ready" or be gone ...
	I0903 23:39:38.844005  167951 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-434043" in "kube-system" namespace to be "Ready" or be gone ...
	I0903 23:39:38.851465  167951 pod_ready.go:94] pod "kube-apiserver-no-preload-434043" is "Ready"
	I0903 23:39:38.851496  167951 pod_ready.go:86] duration metric: took 7.457768ms for pod "kube-apiserver-no-preload-434043" in "kube-system" namespace to be "Ready" or be gone ...
	I0903 23:39:38.853909  167951 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-434043" in "kube-system" namespace to be "Ready" or be gone ...
	I0903 23:39:38.859802  167951 pod_ready.go:94] pod "kube-controller-manager-no-preload-434043" is "Ready"
	I0903 23:39:38.859824  167951 pod_ready.go:86] duration metric: took 5.889234ms for pod "kube-controller-manager-no-preload-434043" in "kube-system" namespace to be "Ready" or be gone ...
	I0903 23:39:38.863186  167951 pod_ready.go:83] waiting for pod "kube-proxy-lf7rz" in "kube-system" namespace to be "Ready" or be gone ...
	I0903 23:39:39.113115  167951 pod_ready.go:94] pod "kube-proxy-lf7rz" is "Ready"
	I0903 23:39:39.113155  167951 pod_ready.go:86] duration metric: took 249.948168ms for pod "kube-proxy-lf7rz" in "kube-system" namespace to be "Ready" or be gone ...
	I0903 23:39:39.315739  167951 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-434043" in "kube-system" namespace to be "Ready" or be gone ...
	I0903 23:39:39.712333  167951 pod_ready.go:94] pod "kube-scheduler-no-preload-434043" is "Ready"
	I0903 23:39:39.712376  167951 pod_ready.go:86] duration metric: took 396.599596ms for pod "kube-scheduler-no-preload-434043" in "kube-system" namespace to be "Ready" or be gone ...
	I0903 23:39:39.712391  167951 pod_ready.go:40] duration metric: took 4.405411155s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0903 23:39:39.778245  167951 start.go:617] kubectl: 1.33.2, cluster: 1.34.0 (minor skew: 1)
	I0903 23:39:39.779595  167951 out.go:179] * Done! kubectl is now configured to use "no-preload-434043" cluster and "default" namespace by default
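	[editor's note] The pod_ready.go lines above poll each control-plane pod in "kube-system" until it reports Ready (or disappears). A minimal client-go sketch of that readiness check follows; it is an illustration, not minikube's actual implementation, and the kubeconfig path and pod name are assumptions taken from the log.

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// podIsReady reports whether the pod's PodReady condition is True.
	func podIsReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		// Assumed kubeconfig path for illustration only.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/21341-109162/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		for {
			pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "etcd-no-preload-434043", metav1.GetOptions{})
			if err == nil && podIsReady(pod) {
				fmt.Println("pod is Ready")
				return
			}
			time.Sleep(500 * time.Millisecond) // poll until Ready, as pod_ready.go does above
		}
	}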
	I0903 23:39:35.756635  168184 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:39:35.795249  168184 api_server.go:72] duration metric: took 2.538848326s to wait for apiserver process to appear ...
	I0903 23:39:35.795285  168184 api_server.go:88] waiting for apiserver healthz status ...
	I0903 23:39:35.795314  168184 api_server.go:253] Checking apiserver healthz at https://192.168.50.143:8443/healthz ...
	I0903 23:39:38.583193  168184 api_server.go:279] https://192.168.50.143:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0903 23:39:38.583228  168184 api_server.go:103] status: https://192.168.50.143:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0903 23:39:38.583252  168184 api_server.go:253] Checking apiserver healthz at https://192.168.50.143:8443/healthz ...
	I0903 23:39:38.685816  168184 api_server.go:279] https://192.168.50.143:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0903 23:39:38.685847  168184 api_server.go:103] status: https://192.168.50.143:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0903 23:39:38.796197  168184 api_server.go:253] Checking apiserver healthz at https://192.168.50.143:8443/healthz ...
	I0903 23:39:38.802478  168184 api_server.go:279] https://192.168.50.143:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0903 23:39:38.802514  168184 api_server.go:103] status: https://192.168.50.143:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0903 23:39:39.296152  168184 api_server.go:253] Checking apiserver healthz at https://192.168.50.143:8443/healthz ...
	I0903 23:39:39.304676  168184 api_server.go:279] https://192.168.50.143:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0903 23:39:39.304709  168184 api_server.go:103] status: https://192.168.50.143:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0903 23:39:39.795900  168184 api_server.go:253] Checking apiserver healthz at https://192.168.50.143:8443/healthz ...
	I0903 23:39:39.808669  168184 api_server.go:279] https://192.168.50.143:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0903 23:39:39.808701  168184 api_server.go:103] status: https://192.168.50.143:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0903 23:39:40.296345  168184 api_server.go:253] Checking apiserver healthz at https://192.168.50.143:8443/healthz ...
	I0903 23:39:40.301248  168184 api_server.go:279] https://192.168.50.143:8443/healthz returned 200:
	ok
	I0903 23:39:40.308506  168184 api_server.go:141] control plane version: v1.34.0
	I0903 23:39:40.308532  168184 api_server.go:131] duration metric: took 4.513239874s to wait for apiserver health ...
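	[editor's note] The 403 -> 500 -> 200 progression above is the normal apiserver restart sequence: 403 means the TLS endpoint is serving but the RBAC rules that let anonymous clients read /healthz have not been bootstrapped yet (note the [-]poststarthook/rbac/bootstrap-roles line), 500 enumerates post-start hooks with [-] marking the ones still failing, and 200 "ok" ends the wait. A hedged sketch of such a poll loop, not minikube's actual api_server.go code; the insecure TLS config and endpoint are assumptions for illustration:

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			// Skipping verification only because this sketch has no CA bundle.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
			Timeout:   5 * time.Second,
		}
		for {
			resp, err := client.Get("https://192.168.50.143:8443/healthz")
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
				// 403 (anonymous forbidden) and 500 (hooks still failing) both mean
				// "serving but not healthy yet"; only 200 ends the wait.
				if resp.StatusCode == http.StatusOK {
					return
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
	}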
	I0903 23:39:40.308544  168184 cni.go:84] Creating CNI manager for ""
	I0903 23:39:40.308560  168184 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0903 23:39:40.310257  168184 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I0903 23:39:40.311411  168184 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0903 23:39:40.324297  168184 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
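	[editor's note] The 496-byte /etc/cni/net.d/1-k8s.conflist written above configures the bridge CNI plugin recommended for the "kvm2" driver + "crio" runtime. The constant below is an illustrative approximation of a typical bridge+portmap conflist, not the exact file minikube generates; the subnet and plugin options are assumptions.

	package main

	import "fmt"

	const bridgeConflist = `{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": {
	        "type": "host-local",
	        "subnet": "10.244.0.0/16"
	      }
	    },
	    {
	      "type": "portmap",
	      "capabilities": { "portMappings": true }
	    }
	  ]
	}`

	func main() { fmt.Println(bridgeConflist) }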
	I0903 23:39:40.359191  168184 system_pods.go:43] waiting for kube-system pods to appear ...
	I0903 23:39:40.365887  168184 system_pods.go:59] 8 kube-system pods found
	I0903 23:39:40.365935  168184 system_pods.go:61] "coredns-66bc5c9577-hg9bb" [f8c43287-ec9a-48ad-b799-e5bb4b30b817] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0903 23:39:40.365948  168184 system_pods.go:61] "etcd-embed-certs-088493" [0917f6cc-6edc-4812-81c3-15f318021f57] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0903 23:39:40.365960  168184 system_pods.go:61] "kube-apiserver-embed-certs-088493" [5324d5b1-225a-4bab-8624-807c65f7737f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0903 23:39:40.365970  168184 system_pods.go:61] "kube-controller-manager-embed-certs-088493" [c15fb12e-7f6b-4bfe-977c-97d35447e245] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0903 23:39:40.365979  168184 system_pods.go:61] "kube-proxy-pgtpd" [083b9318-0780-4c96-8991-7534443b6159] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0903 23:39:40.365994  168184 system_pods.go:61] "kube-scheduler-embed-certs-088493" [41c8bd25-dbb6-4d53-8642-d6f837c5c859] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0903 23:39:40.366002  168184 system_pods.go:61] "metrics-server-746fcd58dc-85qvg" [000bf568-f6a0-4621-899d-788283765155] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0903 23:39:40.366010  168184 system_pods.go:61] "storage-provisioner" [7c1d1800-66c1-42f5-87c5-675fd6610230] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0903 23:39:40.366018  168184 system_pods.go:74] duration metric: took 6.796748ms to wait for pod list to return data ...
	I0903 23:39:40.366035  168184 node_conditions.go:102] verifying NodePressure condition ...
	I0903 23:39:40.370198  168184 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0903 23:39:40.370234  168184 node_conditions.go:123] node cpu capacity is 2
	I0903 23:39:40.370251  168184 node_conditions.go:105] duration metric: took 4.209293ms to run NodePressure ...
	I0903 23:39:40.370274  168184 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0903 23:39:40.700552  168184 kubeadm.go:720] waiting for restarted kubelet to initialise ...
	I0903 23:39:40.707329  168184 kubeadm.go:735] kubelet initialised
	I0903 23:39:40.707359  168184 kubeadm.go:736] duration metric: took 6.769898ms waiting for restarted kubelet to initialise ...
	I0903 23:39:40.707380  168184 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0903 23:39:40.742387  168184 ops.go:34] apiserver oom_adj: -16
	I0903 23:39:40.742423  168184 kubeadm.go:593] duration metric: took 9.899238858s to restartPrimaryControlPlane
	I0903 23:39:40.742436  168184 kubeadm.go:394] duration metric: took 9.962706136s to StartCluster
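	[editor's note] The oom_adj check a few lines above reads /proc/<pid>/oom_adj for the kube-apiserver; the -16 result biases the kernel's OOM killer away from the apiserver so other processes are sacrificed first. A minimal Go sketch of the same probe (`cat /proc/$(pgrep kube-apiserver)/oom_adj`), assuming a live kube-apiserver process started by minikube:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	func main() {
		// Same pattern as the log above: newest process whose full command
		// line matches kube-apiserver.*minikube.*
		out, err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
		if err != nil {
			fmt.Fprintln(os.Stderr, "kube-apiserver not running:", err)
			return
		}
		pid := strings.TrimSpace(string(out))
		adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		fmt.Printf("apiserver oom_adj: %s", adj) // expected: -16, as logged above
	}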
	I0903 23:39:40.742460  168184 settings.go:142] acquiring lock: {Name:mkb1ef9c34f4ee762bb1ce9c74e3b8a2e234a4c2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0903 23:39:40.742582  168184 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21341-109162/kubeconfig
	I0903 23:39:40.744274  168184 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21341-109162/kubeconfig: {Name:mkb8e4377749caffe9aea4452a6b5de9f0dc7427 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0903 23:39:40.744616  168184 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.143 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0903 23:39:40.744750  168184 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0903 23:39:40.744860  168184 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-088493"
	I0903 23:39:40.744868  168184 config.go:182] Loaded profile config "embed-certs-088493": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0903 23:39:40.744881  168184 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-088493"
	W0903 23:39:40.744893  168184 addons.go:247] addon storage-provisioner should already be in state true
	I0903 23:39:40.744922  168184 addons.go:69] Setting default-storageclass=true in profile "embed-certs-088493"
	I0903 23:39:40.744933  168184 addons.go:69] Setting metrics-server=true in profile "embed-certs-088493"
	I0903 23:39:40.744944  168184 addons.go:238] Setting addon metrics-server=true in "embed-certs-088493"
	I0903 23:39:40.744944  168184 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-088493"
	W0903 23:39:40.744954  168184 addons.go:247] addon metrics-server should already be in state true
	I0903 23:39:40.744973  168184 host.go:66] Checking if "embed-certs-088493" exists ...
	I0903 23:39:40.745459  168184 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21341-109162/.minikube/bin/docker-machine-driver-kvm2
	I0903 23:39:40.745485  168184 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21341-109162/.minikube/bin/docker-machine-driver-kvm2
	I0903 23:39:40.745506  168184 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0903 23:39:40.745535  168184 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0903 23:39:40.744924  168184 host.go:66] Checking if "embed-certs-088493" exists ...
	I0903 23:39:40.745779  168184 addons.go:69] Setting dashboard=true in profile "embed-certs-088493"
	I0903 23:39:40.745802  168184 addons.go:238] Setting addon dashboard=true in "embed-certs-088493"
	W0903 23:39:40.745830  168184 addons.go:247] addon dashboard should already be in state true
	I0903 23:39:40.745870  168184 host.go:66] Checking if "embed-certs-088493" exists ...
	I0903 23:39:40.746262  168184 out.go:179] * Verifying Kubernetes components...
	I0903 23:39:40.746282  168184 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21341-109162/.minikube/bin/docker-machine-driver-kvm2
	I0903 23:39:40.746267  168184 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21341-109162/.minikube/bin/docker-machine-driver-kvm2
	I0903 23:39:40.746391  168184 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0903 23:39:40.746425  168184 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0903 23:39:40.747698  168184 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0903 23:39:40.767429  168184 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35437
	I0903 23:39:40.767449  168184 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38997
	I0903 23:39:40.767992  168184 main.go:141] libmachine: () Calling .GetVersion
	I0903 23:39:40.768030  168184 main.go:141] libmachine: () Calling .GetVersion
	I0903 23:39:40.768589  168184 main.go:141] libmachine: Using API Version  1
	I0903 23:39:40.768620  168184 main.go:141] libmachine: () Calling .SetConfigRaw
	I0903 23:39:40.768921  168184 main.go:141] libmachine: Using API Version  1
	I0903 23:39:40.768944  168184 main.go:141] libmachine: () Calling .SetConfigRaw
	I0903 23:39:40.769038  168184 main.go:141] libmachine: () Calling .GetMachineName
	I0903 23:39:40.769266  168184 main.go:141] libmachine: (embed-certs-088493) Calling .GetState
	I0903 23:39:40.769418  168184 main.go:141] libmachine: () Calling .GetMachineName
	I0903 23:39:40.770014  168184 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45587
	I0903 23:39:40.770554  168184 main.go:141] libmachine: () Calling .GetVersion
	I0903 23:39:40.771097  168184 main.go:141] libmachine: Using API Version  1
	I0903 23:39:40.771115  168184 main.go:141] libmachine: () Calling .SetConfigRaw
	I0903 23:39:40.771582  168184 main.go:141] libmachine: () Calling .GetMachineName
	I0903 23:39:40.772143  168184 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21341-109162/.minikube/bin/docker-machine-driver-kvm2
	I0903 23:39:40.772190  168184 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0903 23:39:40.773072  168184 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21341-109162/.minikube/bin/docker-machine-driver-kvm2
	I0903 23:39:40.773117  168184 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0903 23:39:40.773482  168184 addons.go:238] Setting addon default-storageclass=true in "embed-certs-088493"
	W0903 23:39:40.773506  168184 addons.go:247] addon default-storageclass should already be in state true
	I0903 23:39:40.773541  168184 host.go:66] Checking if "embed-certs-088493" exists ...
	I0903 23:39:40.773952  168184 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21341-109162/.minikube/bin/docker-machine-driver-kvm2
	I0903 23:39:40.773999  168184 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0903 23:39:40.774960  168184 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36051
	I0903 23:39:40.775401  168184 main.go:141] libmachine: () Calling .GetVersion
	I0903 23:39:40.775921  168184 main.go:141] libmachine: Using API Version  1
	I0903 23:39:40.775942  168184 main.go:141] libmachine: () Calling .SetConfigRaw
	I0903 23:39:40.776349  168184 main.go:141] libmachine: () Calling .GetMachineName
	I0903 23:39:40.776900  168184 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21341-109162/.minikube/bin/docker-machine-driver-kvm2
	I0903 23:39:40.776938  168184 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0903 23:39:40.793573  168184 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42823
	I0903 23:39:40.794210  168184 main.go:141] libmachine: () Calling .GetVersion
	I0903 23:39:40.794795  168184 main.go:141] libmachine: Using API Version  1
	I0903 23:39:40.794822  168184 main.go:141] libmachine: () Calling .SetConfigRaw
	I0903 23:39:40.794889  168184 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37443
	I0903 23:39:40.795389  168184 main.go:141] libmachine: () Calling .GetMachineName
	I0903 23:39:40.795443  168184 main.go:141] libmachine: () Calling .GetVersion
	I0903 23:39:40.795827  168184 main.go:141] libmachine: Using API Version  1
	I0903 23:39:40.795843  168184 main.go:141] libmachine: () Calling .SetConfigRaw
	I0903 23:39:40.796051  168184 main.go:141] libmachine: (embed-certs-088493) Calling .GetState
	I0903 23:39:40.796242  168184 main.go:141] libmachine: () Calling .GetMachineName
	I0903 23:39:40.796398  168184 main.go:141] libmachine: (embed-certs-088493) Calling .GetState
	I0903 23:39:40.798691  168184 main.go:141] libmachine: (embed-certs-088493) Calling .DriverName
	I0903 23:39:40.799273  168184 main.go:141] libmachine: (embed-certs-088493) Calling .DriverName
	I0903 23:39:40.800606  168184 out.go:179]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0903 23:39:40.800622  168184 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0903 23:39:40.801751  168184 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0903 23:39:40.801768  168184 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0903 23:39:40.801852  168184 main.go:141] libmachine: (embed-certs-088493) Calling .GetSSHHostname
	I0903 23:39:40.803035  168184 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I0903 23:39:40.804238  168184 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0903 23:39:40.804257  168184 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0903 23:39:40.804278  168184 main.go:141] libmachine: (embed-certs-088493) Calling .GetSSHHostname
	I0903 23:39:40.804408  168184 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39483
	I0903 23:39:40.804948  168184 main.go:141] libmachine: () Calling .GetVersion
	I0903 23:39:40.806065  168184 main.go:141] libmachine: (embed-certs-088493) DBG | domain embed-certs-088493 has defined MAC address 52:54:00:49:bd:07 in network mk-embed-certs-088493
	I0903 23:39:40.806185  168184 main.go:141] libmachine: Using API Version  1
	I0903 23:39:40.806214  168184 main.go:141] libmachine: () Calling .SetConfigRaw
	I0903 23:39:40.806622  168184 main.go:141] libmachine: () Calling .GetMachineName
	I0903 23:39:40.807366  168184 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21341-109162/.minikube/bin/docker-machine-driver-kvm2
	I0903 23:39:40.807410  168184 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0903 23:39:40.807634  168184 main.go:141] libmachine: (embed-certs-088493) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:bd:07", ip: ""} in network mk-embed-certs-088493: {Iface:virbr2 ExpiryTime:2025-09-04 00:39:17 +0000 UTC Type:0 Mac:52:54:00:49:bd:07 Iaid: IPaddr:192.168.50.143 Prefix:24 Hostname:embed-certs-088493 Clientid:01:52:54:00:49:bd:07}
	I0903 23:39:40.807666  168184 main.go:141] libmachine: (embed-certs-088493) DBG | domain embed-certs-088493 has defined IP address 192.168.50.143 and MAC address 52:54:00:49:bd:07 in network mk-embed-certs-088493
	I0903 23:39:40.808118  168184 main.go:141] libmachine: (embed-certs-088493) DBG | domain embed-certs-088493 has defined MAC address 52:54:00:49:bd:07 in network mk-embed-certs-088493
	I0903 23:39:40.808378  168184 main.go:141] libmachine: (embed-certs-088493) Calling .GetSSHPort
	I0903 23:39:40.808540  168184 main.go:141] libmachine: (embed-certs-088493) Calling .GetSSHKeyPath
	I0903 23:39:40.808652  168184 main.go:141] libmachine: (embed-certs-088493) Calling .GetSSHUsername
	I0903 23:39:40.808753  168184 sshutil.go:53] new ssh client: &{IP:192.168.50.143 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21341-109162/.minikube/machines/embed-certs-088493/id_rsa Username:docker}
	I0903 23:39:40.813952  168184 main.go:141] libmachine: (embed-certs-088493) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:bd:07", ip: ""} in network mk-embed-certs-088493: {Iface:virbr2 ExpiryTime:2025-09-04 00:39:17 +0000 UTC Type:0 Mac:52:54:00:49:bd:07 Iaid: IPaddr:192.168.50.143 Prefix:24 Hostname:embed-certs-088493 Clientid:01:52:54:00:49:bd:07}
	I0903 23:39:40.813983  168184 main.go:141] libmachine: (embed-certs-088493) DBG | domain embed-certs-088493 has defined IP address 192.168.50.143 and MAC address 52:54:00:49:bd:07 in network mk-embed-certs-088493
	I0903 23:39:40.814174  168184 main.go:141] libmachine: (embed-certs-088493) Calling .GetSSHPort
	I0903 23:39:40.814360  168184 main.go:141] libmachine: (embed-certs-088493) Calling .GetSSHKeyPath
	I0903 23:39:40.815752  168184 main.go:141] libmachine: (embed-certs-088493) Calling .GetSSHUsername
	I0903 23:39:40.815909  168184 sshutil.go:53] new ssh client: &{IP:192.168.50.143 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21341-109162/.minikube/machines/embed-certs-088493/id_rsa Username:docker}
	I0903 23:39:40.824248  168184 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42633
	I0903 23:39:40.824946  168184 main.go:141] libmachine: () Calling .GetVersion
	I0903 23:39:40.825622  168184 main.go:141] libmachine: Using API Version  1
	I0903 23:39:40.825648  168184 main.go:141] libmachine: () Calling .SetConfigRaw
	I0903 23:39:40.826219  168184 main.go:141] libmachine: () Calling .GetMachineName
	I0903 23:39:40.826431  168184 main.go:141] libmachine: (embed-certs-088493) Calling .GetState
	I0903 23:39:40.828287  168184 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45625
	I0903 23:39:40.828447  168184 main.go:141] libmachine: (embed-certs-088493) Calling .DriverName
	I0903 23:39:40.828934  168184 main.go:141] libmachine: () Calling .GetVersion
	I0903 23:39:40.829313  168184 main.go:141] libmachine: Using API Version  1
	I0903 23:39:40.829328  168184 main.go:141] libmachine: () Calling .SetConfigRaw
	I0903 23:39:40.829707  168184 main.go:141] libmachine: () Calling .GetMachineName
	I0903 23:39:40.829930  168184 main.go:141] libmachine: (embed-certs-088493) Calling .GetState
	I0903 23:39:40.830176  168184 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0903 23:39:36.593552  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | domain default-k8s-diff-port-799704 has defined MAC address 52:54:00:a0:5b:2e in network mk-default-k8s-diff-port-799704
	I0903 23:39:36.594179  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | unable to find current IP address of domain default-k8s-diff-port-799704 in network mk-default-k8s-diff-port-799704
	I0903 23:39:36.594207  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | I0903 23:39:36.594112  168561 retry.go:31] will retry after 1.356760931s: waiting for domain to come up
	I0903 23:39:37.952896  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | domain default-k8s-diff-port-799704 has defined MAC address 52:54:00:a0:5b:2e in network mk-default-k8s-diff-port-799704
	I0903 23:39:37.953568  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | unable to find current IP address of domain default-k8s-diff-port-799704 in network mk-default-k8s-diff-port-799704
	I0903 23:39:37.953607  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | I0903 23:39:37.953473  168561 retry.go:31] will retry after 1.294359259s: waiting for domain to come up
	I0903 23:39:39.249609  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | domain default-k8s-diff-port-799704 has defined MAC address 52:54:00:a0:5b:2e in network mk-default-k8s-diff-port-799704
	I0903 23:39:39.250217  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | unable to find current IP address of domain default-k8s-diff-port-799704 in network mk-default-k8s-diff-port-799704
	I0903 23:39:39.250262  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | I0903 23:39:39.250156  168561 retry.go:31] will retry after 1.639365303s: waiting for domain to come up
	I0903 23:39:40.891606  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | domain default-k8s-diff-port-799704 has defined MAC address 52:54:00:a0:5b:2e in network mk-default-k8s-diff-port-799704
	I0903 23:39:40.892251  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | unable to find current IP address of domain default-k8s-diff-port-799704 in network mk-default-k8s-diff-port-799704
	I0903 23:39:40.892279  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | I0903 23:39:40.892154  168561 retry.go:31] will retry after 2.142708119s: waiting for domain to come up
	I0903 23:39:40.831548  168184 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0903 23:39:40.831567  168184 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0903 23:39:40.831594  168184 main.go:141] libmachine: (embed-certs-088493) Calling .GetSSHHostname
	I0903 23:39:40.831860  168184 main.go:141] libmachine: (embed-certs-088493) Calling .DriverName
	I0903 23:39:40.833031  168184 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0903 23:39:40.833048  168184 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0903 23:39:40.833066  168184 main.go:141] libmachine: (embed-certs-088493) Calling .GetSSHHostname
	I0903 23:39:40.835589  168184 main.go:141] libmachine: (embed-certs-088493) DBG | domain embed-certs-088493 has defined MAC address 52:54:00:49:bd:07 in network mk-embed-certs-088493
	I0903 23:39:40.836095  168184 main.go:141] libmachine: (embed-certs-088493) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:bd:07", ip: ""} in network mk-embed-certs-088493: {Iface:virbr2 ExpiryTime:2025-09-04 00:39:17 +0000 UTC Type:0 Mac:52:54:00:49:bd:07 Iaid: IPaddr:192.168.50.143 Prefix:24 Hostname:embed-certs-088493 Clientid:01:52:54:00:49:bd:07}
	I0903 23:39:40.836120  168184 main.go:141] libmachine: (embed-certs-088493) DBG | domain embed-certs-088493 has defined IP address 192.168.50.143 and MAC address 52:54:00:49:bd:07 in network mk-embed-certs-088493
	I0903 23:39:40.836634  168184 main.go:141] libmachine: (embed-certs-088493) DBG | domain embed-certs-088493 has defined MAC address 52:54:00:49:bd:07 in network mk-embed-certs-088493
	I0903 23:39:40.836881  168184 main.go:141] libmachine: (embed-certs-088493) Calling .GetSSHPort
	I0903 23:39:40.837063  168184 main.go:141] libmachine: (embed-certs-088493) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:bd:07", ip: ""} in network mk-embed-certs-088493: {Iface:virbr2 ExpiryTime:2025-09-04 00:39:17 +0000 UTC Type:0 Mac:52:54:00:49:bd:07 Iaid: IPaddr:192.168.50.143 Prefix:24 Hostname:embed-certs-088493 Clientid:01:52:54:00:49:bd:07}
	I0903 23:39:40.837087  168184 main.go:141] libmachine: (embed-certs-088493) DBG | domain embed-certs-088493 has defined IP address 192.168.50.143 and MAC address 52:54:00:49:bd:07 in network mk-embed-certs-088493
	I0903 23:39:40.837348  168184 main.go:141] libmachine: (embed-certs-088493) Calling .GetSSHPort
	I0903 23:39:40.838498  168184 main.go:141] libmachine: (embed-certs-088493) Calling .GetSSHKeyPath
	I0903 23:39:40.838667  168184 main.go:141] libmachine: (embed-certs-088493) Calling .GetSSHUsername
	I0903 23:39:40.838816  168184 sshutil.go:53] new ssh client: &{IP:192.168.50.143 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21341-109162/.minikube/machines/embed-certs-088493/id_rsa Username:docker}
	I0903 23:39:40.843815  168184 main.go:141] libmachine: (embed-certs-088493) Calling .GetSSHKeyPath
	I0903 23:39:40.844047  168184 main.go:141] libmachine: (embed-certs-088493) Calling .GetSSHUsername
	I0903 23:39:40.844370  168184 sshutil.go:53] new ssh client: &{IP:192.168.50.143 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21341-109162/.minikube/machines/embed-certs-088493/id_rsa Username:docker}
	I0903 23:39:41.113695  168184 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0903 23:39:41.140527  168184 node_ready.go:35] waiting up to 6m0s for node "embed-certs-088493" to be "Ready" ...
	I0903 23:39:41.252354  168184 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0903 23:39:41.252385  168184 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0903 23:39:41.306321  168184 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0903 23:39:41.310664  168184 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0903 23:39:41.310766  168184 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0903 23:39:41.341460  168184 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0903 23:39:41.341572  168184 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0903 23:39:41.348238  168184 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0903 23:39:41.399239  168184 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0903 23:39:41.399275  168184 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0903 23:39:41.412810  168184 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0903 23:39:41.412848  168184 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0903 23:39:41.489435  168184 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0903 23:39:41.538185  168184 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0903 23:39:41.538223  168184 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0903 23:39:41.592563  168184 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0903 23:39:41.592594  168184 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0903 23:39:41.676605  168184 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0903 23:39:41.676644  168184 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0903 23:39:41.728419  168184 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0903 23:39:41.728455  168184 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0903 23:39:41.766195  168184 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0903 23:39:41.766297  168184 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0903 23:39:41.819460  168184 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0903 23:39:41.819504  168184 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0903 23:39:41.870107  168184 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0903 23:39:41.870149  168184 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0903 23:39:41.918698  168184 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0903 23:39:42.966984  168184 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.660540637s)
	I0903 23:39:42.967054  168184 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.618774457s)
	I0903 23:39:42.967081  168184 main.go:141] libmachine: Making call to close driver server
	I0903 23:39:42.967098  168184 main.go:141] libmachine: (embed-certs-088493) Calling .Close
	I0903 23:39:42.967101  168184 main.go:141] libmachine: Making call to close driver server
	I0903 23:39:42.967114  168184 main.go:141] libmachine: (embed-certs-088493) Calling .Close
	I0903 23:39:42.967189  168184 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.477716601s)
	I0903 23:39:42.967236  168184 main.go:141] libmachine: Making call to close driver server
	I0903 23:39:42.967261  168184 main.go:141] libmachine: (embed-certs-088493) Calling .Close
	I0903 23:39:42.969478  168184 main.go:141] libmachine: Successfully made call to close driver server
	I0903 23:39:42.969480  168184 main.go:141] libmachine: (embed-certs-088493) DBG | Closing plugin on server side
	I0903 23:39:42.969503  168184 main.go:141] libmachine: Making call to close connection to plugin binary
	I0903 23:39:42.969506  168184 main.go:141] libmachine: Successfully made call to close driver server
	I0903 23:39:42.969513  168184 main.go:141] libmachine: (embed-certs-088493) DBG | Closing plugin on server side
	I0903 23:39:42.969523  168184 main.go:141] libmachine: Making call to close connection to plugin binary
	I0903 23:39:42.969513  168184 main.go:141] libmachine: Making call to close driver server
	I0903 23:39:42.969546  168184 main.go:141] libmachine: (embed-certs-088493) Calling .Close
	I0903 23:39:42.969559  168184 main.go:141] libmachine: (embed-certs-088493) DBG | Closing plugin on server side
	I0903 23:39:42.969588  168184 main.go:141] libmachine: Successfully made call to close driver server
	I0903 23:39:42.969601  168184 main.go:141] libmachine: Making call to close connection to plugin binary
	I0903 23:39:42.969611  168184 main.go:141] libmachine: Making call to close driver server
	I0903 23:39:42.969628  168184 main.go:141] libmachine: (embed-certs-088493) Calling .Close
	I0903 23:39:42.969708  168184 main.go:141] libmachine: Making call to close driver server
	I0903 23:39:42.969726  168184 main.go:141] libmachine: (embed-certs-088493) Calling .Close
	I0903 23:39:42.971080  168184 main.go:141] libmachine: (embed-certs-088493) DBG | Closing plugin on server side
	I0903 23:39:42.971088  168184 main.go:141] libmachine: Successfully made call to close driver server
	I0903 23:39:42.971098  168184 main.go:141] libmachine: (embed-certs-088493) DBG | Closing plugin on server side
	I0903 23:39:42.971104  168184 main.go:141] libmachine: Making call to close connection to plugin binary
	I0903 23:39:42.971084  168184 main.go:141] libmachine: Successfully made call to close driver server
	I0903 23:39:42.971185  168184 main.go:141] libmachine: Making call to close connection to plugin binary
	I0903 23:39:42.971197  168184 addons.go:479] Verifying addon metrics-server=true in "embed-certs-088493"
	I0903 23:39:42.971403  168184 main.go:141] libmachine: Successfully made call to close driver server
	I0903 23:39:42.971416  168184 main.go:141] libmachine: Making call to close connection to plugin binary
	I0903 23:39:43.018871  168184 main.go:141] libmachine: Making call to close driver server
	I0903 23:39:43.018900  168184 main.go:141] libmachine: (embed-certs-088493) Calling .Close
	I0903 23:39:43.019306  168184 main.go:141] libmachine: (embed-certs-088493) DBG | Closing plugin on server side
	I0903 23:39:43.019354  168184 main.go:141] libmachine: Successfully made call to close driver server
	I0903 23:39:43.019366  168184 main.go:141] libmachine: Making call to close connection to plugin binary
	W0903 23:39:43.162588  168184 node_ready.go:57] node "embed-certs-088493" has "Ready":"False" status (will retry)
	I0903 23:39:43.258660  168184 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.339847622s)
	I0903 23:39:43.258727  168184 main.go:141] libmachine: Making call to close driver server
	I0903 23:39:43.258741  168184 main.go:141] libmachine: (embed-certs-088493) Calling .Close
	I0903 23:39:43.259077  168184 main.go:141] libmachine: Successfully made call to close driver server
	I0903 23:39:43.259137  168184 main.go:141] libmachine: Making call to close connection to plugin binary
	I0903 23:39:43.259145  168184 main.go:141] libmachine: (embed-certs-088493) DBG | Closing plugin on server side
	I0903 23:39:43.259162  168184 main.go:141] libmachine: Making call to close driver server
	I0903 23:39:43.259279  168184 main.go:141] libmachine: (embed-certs-088493) Calling .Close
	I0903 23:39:43.259595  168184 main.go:141] libmachine: (embed-certs-088493) DBG | Closing plugin on server side
	I0903 23:39:43.259615  168184 main.go:141] libmachine: Successfully made call to close driver server
	I0903 23:39:43.259623  168184 main.go:141] libmachine: Making call to close connection to plugin binary
	I0903 23:39:43.260848  168184 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-088493 addons enable metrics-server
	
	I0903 23:39:43.261929  168184 out.go:179] * Enabled addons: metrics-server, storage-provisioner, default-storageclass, dashboard
	I0903 23:39:43.262942  168184 addons.go:514] duration metric: took 2.518204365s for enable addons: enabled=[metrics-server storage-provisioner default-storageclass dashboard]
	I0903 23:39:43.036707  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | domain default-k8s-diff-port-799704 has defined MAC address 52:54:00:a0:5b:2e in network mk-default-k8s-diff-port-799704
	I0903 23:39:43.037307  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | unable to find current IP address of domain default-k8s-diff-port-799704 in network mk-default-k8s-diff-port-799704
	I0903 23:39:43.037341  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | I0903 23:39:43.037251  168561 retry.go:31] will retry after 2.378633942s: waiting for domain to come up
	I0903 23:39:45.418699  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | domain default-k8s-diff-port-799704 has defined MAC address 52:54:00:a0:5b:2e in network mk-default-k8s-diff-port-799704
	I0903 23:39:45.419270  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | unable to find current IP address of domain default-k8s-diff-port-799704 in network mk-default-k8s-diff-port-799704
	I0903 23:39:45.419294  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | I0903 23:39:45.419170  168561 retry.go:31] will retry after 4.350956655s: waiting for domain to come up
	W0903 23:39:45.644356  168184 node_ready.go:57] node "embed-certs-088493" has "Ready":"False" status (will retry)
	W0903 23:39:47.702029  168184 node_ready.go:57] node "embed-certs-088493" has "Ready":"False" status (will retry)
	I0903 23:39:49.646957  168184 node_ready.go:49] node "embed-certs-088493" is "Ready"
	I0903 23:39:49.646992  168184 node_ready.go:38] duration metric: took 8.506385518s for node "embed-certs-088493" to be "Ready" ...
	I0903 23:39:49.647010  168184 api_server.go:52] waiting for apiserver process to appear ...
	I0903 23:39:49.647071  168184 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:39:49.674344  168184 api_server.go:72] duration metric: took 8.92968556s to wait for apiserver process to appear ...
	I0903 23:39:49.674379  168184 api_server.go:88] waiting for apiserver healthz status ...
	I0903 23:39:49.674406  168184 api_server.go:253] Checking apiserver healthz at https://192.168.50.143:8443/healthz ...
	I0903 23:39:49.683534  168184 api_server.go:279] https://192.168.50.143:8443/healthz returned 200:
	ok
	I0903 23:39:49.684659  168184 api_server.go:141] control plane version: v1.34.0
	I0903 23:39:49.684684  168184 api_server.go:131] duration metric: took 10.295954ms to wait for apiserver health ...
	I0903 23:39:49.684697  168184 system_pods.go:43] waiting for kube-system pods to appear ...
	I0903 23:39:49.689273  168184 system_pods.go:59] 8 kube-system pods found
	I0903 23:39:49.689307  168184 system_pods.go:61] "coredns-66bc5c9577-hg9bb" [f8c43287-ec9a-48ad-b799-e5bb4b30b817] Running
	I0903 23:39:49.689322  168184 system_pods.go:61] "etcd-embed-certs-088493" [0917f6cc-6edc-4812-81c3-15f318021f57] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0903 23:39:49.689331  168184 system_pods.go:61] "kube-apiserver-embed-certs-088493" [5324d5b1-225a-4bab-8624-807c65f7737f] Running
	I0903 23:39:49.689343  168184 system_pods.go:61] "kube-controller-manager-embed-certs-088493" [c15fb12e-7f6b-4bfe-977c-97d35447e245] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0903 23:39:49.689353  168184 system_pods.go:61] "kube-proxy-pgtpd" [083b9318-0780-4c96-8991-7534443b6159] Running
	I0903 23:39:49.689371  168184 system_pods.go:61] "kube-scheduler-embed-certs-088493" [41c8bd25-dbb6-4d53-8642-d6f837c5c859] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0903 23:39:49.689380  168184 system_pods.go:61] "metrics-server-746fcd58dc-85qvg" [000bf568-f6a0-4621-899d-788283765155] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0903 23:39:49.689416  168184 system_pods.go:61] "storage-provisioner" [7c1d1800-66c1-42f5-87c5-675fd6610230] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0903 23:39:49.689425  168184 system_pods.go:74] duration metric: took 4.720826ms to wait for pod list to return data ...
	I0903 23:39:49.689442  168184 default_sa.go:34] waiting for default service account to be created ...
	I0903 23:39:49.693818  168184 default_sa.go:45] found service account: "default"
	I0903 23:39:49.693835  168184 default_sa.go:55] duration metric: took 4.384486ms for default service account to be created ...
	I0903 23:39:49.693843  168184 system_pods.go:116] waiting for k8s-apps to be running ...
	I0903 23:39:49.697438  168184 system_pods.go:86] 8 kube-system pods found
	I0903 23:39:49.697471  168184 system_pods.go:89] "coredns-66bc5c9577-hg9bb" [f8c43287-ec9a-48ad-b799-e5bb4b30b817] Running
	I0903 23:39:49.697486  168184 system_pods.go:89] "etcd-embed-certs-088493" [0917f6cc-6edc-4812-81c3-15f318021f57] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0903 23:39:49.697493  168184 system_pods.go:89] "kube-apiserver-embed-certs-088493" [5324d5b1-225a-4bab-8624-807c65f7737f] Running
	I0903 23:39:49.697509  168184 system_pods.go:89] "kube-controller-manager-embed-certs-088493" [c15fb12e-7f6b-4bfe-977c-97d35447e245] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0903 23:39:49.697519  168184 system_pods.go:89] "kube-proxy-pgtpd" [083b9318-0780-4c96-8991-7534443b6159] Running
	I0903 23:39:49.697529  168184 system_pods.go:89] "kube-scheduler-embed-certs-088493" [41c8bd25-dbb6-4d53-8642-d6f837c5c859] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0903 23:39:49.697543  168184 system_pods.go:89] "metrics-server-746fcd58dc-85qvg" [000bf568-f6a0-4621-899d-788283765155] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0903 23:39:49.697557  168184 system_pods.go:89] "storage-provisioner" [7c1d1800-66c1-42f5-87c5-675fd6610230] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0903 23:39:49.697572  168184 system_pods.go:126] duration metric: took 3.722231ms to wait for k8s-apps to be running ...
	I0903 23:39:49.697586  168184 system_svc.go:44] waiting for kubelet service to be running ....
	I0903 23:39:49.697650  168184 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0903 23:39:49.722443  168184 system_svc.go:56] duration metric: took 24.84315ms WaitForService to wait for kubelet
	I0903 23:39:49.722482  168184 kubeadm.go:578] duration metric: took 8.977829577s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0903 23:39:49.722519  168184 node_conditions.go:102] verifying NodePressure condition ...
	I0903 23:39:49.728053  168184 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0903 23:39:49.728077  168184 node_conditions.go:123] node cpu capacity is 2
	I0903 23:39:49.728088  168184 node_conditions.go:105] duration metric: took 5.564387ms to run NodePressure ...
	I0903 23:39:49.728101  168184 start.go:241] waiting for startup goroutines ...
	I0903 23:39:49.728110  168184 start.go:246] waiting for cluster config update ...
	I0903 23:39:49.728123  168184 start.go:255] writing updated cluster config ...
	I0903 23:39:49.728441  168184 ssh_runner.go:195] Run: rm -f paused
	I0903 23:39:49.735381  168184 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0903 23:39:49.742029  168184 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-hg9bb" in "kube-system" namespace to be "Ready" or be gone ...
	I0903 23:39:49.750961  168184 pod_ready.go:94] pod "coredns-66bc5c9577-hg9bb" is "Ready"
	I0903 23:39:49.750990  168184 pod_ready.go:86] duration metric: took 8.940148ms for pod "coredns-66bc5c9577-hg9bb" in "kube-system" namespace to be "Ready" or be gone ...
	I0903 23:39:49.753806  168184 pod_ready.go:83] waiting for pod "etcd-embed-certs-088493" in "kube-system" namespace to be "Ready" or be gone ...
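
The pod_ready wait above polls each control-plane pod until its Ready condition is True, or until the pod disappears. A minimal sketch of that readiness check, assuming client-go's CoreV1 API (the pod name and kubeconfig path are taken from this log; the 2s poll interval is chosen for illustration, and this is not minikube's actual pod_ready.go):

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	apierrors "k8s.io/apimachinery/pkg/api/errors"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // waitPodReady blocks until the named kube-system pod reports
    // condition Ready=True, has been deleted, or the timeout elapses.
    func waitPodReady(cs kubernetes.Interface, name string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), name, metav1.GetOptions{})
    		switch {
    		case apierrors.IsNotFound(err):
    			return nil // "or be gone": a deleted pod also ends the wait
    		case err == nil:
    			for _, c := range pod.Status.Conditions {
    				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
    					return nil
    				}
    			}
    		}
    		time.Sleep(2 * time.Second) // poll interval chosen for illustration
    	}
    	return fmt.Errorf("pod %q not ready after %v", name, timeout)
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	err = waitPodReady(kubernetes.NewForConfigOrDie(cfg), "etcd-embed-certs-088493", 4*time.Minute)
    	fmt.Println(err)
    }

The "or be gone" clause matters during restarts: a control-plane pod that was deleted and replaced should not fail the wait, so NotFound is treated as success rather than an error.
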
	I0903 23:39:49.772119  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | domain default-k8s-diff-port-799704 has defined MAC address 52:54:00:a0:5b:2e in network mk-default-k8s-diff-port-799704
	I0903 23:39:49.772626  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) found domain IP: 192.168.39.63
	I0903 23:39:49.772661  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | domain default-k8s-diff-port-799704 has current primary IP address 192.168.39.63 and MAC address 52:54:00:a0:5b:2e in network mk-default-k8s-diff-port-799704
	I0903 23:39:49.772672  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) reserving static IP address...
	I0903 23:39:49.773083  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-799704", mac: "52:54:00:a0:5b:2e", ip: "192.168.39.63"} in network mk-default-k8s-diff-port-799704: {Iface:virbr1 ExpiryTime:2025-09-04 00:39:43 +0000 UTC Type:0 Mac:52:54:00:a0:5b:2e Iaid: IPaddr:192.168.39.63 Prefix:24 Hostname:default-k8s-diff-port-799704 Clientid:01:52:54:00:a0:5b:2e}
	I0903 23:39:49.773114  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | skip adding static IP to network mk-default-k8s-diff-port-799704 - found existing host DHCP lease matching {name: "default-k8s-diff-port-799704", mac: "52:54:00:a0:5b:2e", ip: "192.168.39.63"}
	I0903 23:39:49.773130  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) reserved static IP address 192.168.39.63 for domain default-k8s-diff-port-799704
	I0903 23:39:49.773143  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) waiting for SSH...
	I0903 23:39:49.773158  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | Getting to WaitForSSH function...
	I0903 23:39:49.775358  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | domain default-k8s-diff-port-799704 has defined MAC address 52:54:00:a0:5b:2e in network mk-default-k8s-diff-port-799704
	I0903 23:39:49.775784  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:5b:2e", ip: ""} in network mk-default-k8s-diff-port-799704: {Iface:virbr1 ExpiryTime:2025-09-04 00:39:43 +0000 UTC Type:0 Mac:52:54:00:a0:5b:2e Iaid: IPaddr:192.168.39.63 Prefix:24 Hostname:default-k8s-diff-port-799704 Clientid:01:52:54:00:a0:5b:2e}
	I0903 23:39:49.775821  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | domain default-k8s-diff-port-799704 has defined IP address 192.168.39.63 and MAC address 52:54:00:a0:5b:2e in network mk-default-k8s-diff-port-799704
	I0903 23:39:49.775914  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | Using SSH client type: external
	I0903 23:39:49.775969  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | Using SSH private key: /home/jenkins/minikube-integration/21341-109162/.minikube/machines/default-k8s-diff-port-799704/id_rsa (-rw-------)
	I0903 23:39:49.776034  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.63 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/21341-109162/.minikube/machines/default-k8s-diff-port-799704/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0903 23:39:49.776052  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | About to run SSH command:
	I0903 23:39:49.776061  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | exit 0
	I0903 23:39:49.901906  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | SSH cmd err, output: <nil>: 
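
WaitForSSH above shells out to the external ssh client with the options shown and treats a clean `exit 0` as proof that the guest's sshd is up. A minimal sketch of that probe, reusing the key path and a few of the flags from the log (the retry count and 3s interval are illustrative):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // sshReady mirrors WaitForSSH: run `exit 0` on the guest via the
    // external ssh client and treat a zero exit status as liveness.
    func sshReady(ip, keyPath string, attempts int) bool {
    	for i := 0; i < attempts; i++ {
    		cmd := exec.Command("ssh",
    			"-o", "StrictHostKeyChecking=no",
    			"-o", "UserKnownHostsFile=/dev/null",
    			"-o", "ConnectTimeout=10",
    			"-i", keyPath,
    			"docker@"+ip, "exit 0")
    		if cmd.Run() == nil {
    			return true
    		}
    		time.Sleep(3 * time.Second) // retry interval chosen for illustration
    	}
    	return false
    }

    func main() {
    	ok := sshReady("192.168.39.63",
    		"/home/jenkins/minikube-integration/21341-109162/.minikube/machines/default-k8s-diff-port-799704/id_rsa", 20)
    	fmt.Println("ssh ready:", ok)
    }

Using the system ssh binary keeps the probe identical to what a user would run by hand, at the cost of depending on /usr/bin/ssh being present on the host.
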
	I0903 23:39:49.902261  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) Calling .GetConfigRaw
	I0903 23:39:49.902844  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) Calling .GetIP
	I0903 23:39:49.905187  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | domain default-k8s-diff-port-799704 has defined MAC address 52:54:00:a0:5b:2e in network mk-default-k8s-diff-port-799704
	I0903 23:39:49.905557  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:5b:2e", ip: ""} in network mk-default-k8s-diff-port-799704: {Iface:virbr1 ExpiryTime:2025-09-04 00:39:43 +0000 UTC Type:0 Mac:52:54:00:a0:5b:2e Iaid: IPaddr:192.168.39.63 Prefix:24 Hostname:default-k8s-diff-port-799704 Clientid:01:52:54:00:a0:5b:2e}
	I0903 23:39:49.905588  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | domain default-k8s-diff-port-799704 has defined IP address 192.168.39.63 and MAC address 52:54:00:a0:5b:2e in network mk-default-k8s-diff-port-799704
	I0903 23:39:49.905853  168525 profile.go:143] Saving config to /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/default-k8s-diff-port-799704/config.json ...
	I0903 23:39:49.906117  168525 machine.go:93] provisionDockerMachine start ...
	I0903 23:39:49.906164  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) Calling .DriverName
	I0903 23:39:49.906436  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) Calling .GetSSHHostname
	I0903 23:39:49.909118  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | domain default-k8s-diff-port-799704 has defined MAC address 52:54:00:a0:5b:2e in network mk-default-k8s-diff-port-799704
	I0903 23:39:49.909485  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:5b:2e", ip: ""} in network mk-default-k8s-diff-port-799704: {Iface:virbr1 ExpiryTime:2025-09-04 00:39:43 +0000 UTC Type:0 Mac:52:54:00:a0:5b:2e Iaid: IPaddr:192.168.39.63 Prefix:24 Hostname:default-k8s-diff-port-799704 Clientid:01:52:54:00:a0:5b:2e}
	I0903 23:39:49.909517  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | domain default-k8s-diff-port-799704 has defined IP address 192.168.39.63 and MAC address 52:54:00:a0:5b:2e in network mk-default-k8s-diff-port-799704
	I0903 23:39:49.909628  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) Calling .GetSSHPort
	I0903 23:39:49.909805  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) Calling .GetSSHKeyPath
	I0903 23:39:49.909987  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) Calling .GetSSHKeyPath
	I0903 23:39:49.910151  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) Calling .GetSSHUsername
	I0903 23:39:49.910306  168525 main.go:141] libmachine: Using SSH client type: native
	I0903 23:39:49.910527  168525 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83a420] 0x83d120 <nil>  [] 0s} 192.168.39.63 22 <nil> <nil>}
	I0903 23:39:49.910537  168525 main.go:141] libmachine: About to run SSH command:
	hostname
	I0903 23:39:50.014640  168525 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0903 23:39:50.014669  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) Calling .GetMachineName
	I0903 23:39:50.014904  168525 buildroot.go:166] provisioning hostname "default-k8s-diff-port-799704"
	I0903 23:39:50.014929  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) Calling .GetMachineName
	I0903 23:39:50.015114  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) Calling .GetSSHHostname
	I0903 23:39:50.018055  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | domain default-k8s-diff-port-799704 has defined MAC address 52:54:00:a0:5b:2e in network mk-default-k8s-diff-port-799704
	I0903 23:39:50.018422  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:5b:2e", ip: ""} in network mk-default-k8s-diff-port-799704: {Iface:virbr1 ExpiryTime:2025-09-04 00:39:43 +0000 UTC Type:0 Mac:52:54:00:a0:5b:2e Iaid: IPaddr:192.168.39.63 Prefix:24 Hostname:default-k8s-diff-port-799704 Clientid:01:52:54:00:a0:5b:2e}
	I0903 23:39:50.018472  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | domain default-k8s-diff-port-799704 has defined IP address 192.168.39.63 and MAC address 52:54:00:a0:5b:2e in network mk-default-k8s-diff-port-799704
	I0903 23:39:50.018636  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) Calling .GetSSHPort
	I0903 23:39:50.018849  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) Calling .GetSSHKeyPath
	I0903 23:39:50.019076  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) Calling .GetSSHKeyPath
	I0903 23:39:50.019257  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) Calling .GetSSHUsername
	I0903 23:39:50.019426  168525 main.go:141] libmachine: Using SSH client type: native
	I0903 23:39:50.019678  168525 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83a420] 0x83d120 <nil>  [] 0s} 192.168.39.63 22 <nil> <nil>}
	I0903 23:39:50.019694  168525 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-799704 && echo "default-k8s-diff-port-799704" | sudo tee /etc/hostname
	I0903 23:39:50.141537  168525 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-799704
	
	I0903 23:39:50.141574  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) Calling .GetSSHHostname
	I0903 23:39:50.144682  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | domain default-k8s-diff-port-799704 has defined MAC address 52:54:00:a0:5b:2e in network mk-default-k8s-diff-port-799704
	I0903 23:39:50.145019  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:5b:2e", ip: ""} in network mk-default-k8s-diff-port-799704: {Iface:virbr1 ExpiryTime:2025-09-04 00:39:43 +0000 UTC Type:0 Mac:52:54:00:a0:5b:2e Iaid: IPaddr:192.168.39.63 Prefix:24 Hostname:default-k8s-diff-port-799704 Clientid:01:52:54:00:a0:5b:2e}
	I0903 23:39:50.145049  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | domain default-k8s-diff-port-799704 has defined IP address 192.168.39.63 and MAC address 52:54:00:a0:5b:2e in network mk-default-k8s-diff-port-799704
	I0903 23:39:50.145195  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) Calling .GetSSHPort
	I0903 23:39:50.145418  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) Calling .GetSSHKeyPath
	I0903 23:39:50.145562  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) Calling .GetSSHKeyPath
	I0903 23:39:50.145700  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) Calling .GetSSHUsername
	I0903 23:39:50.145911  168525 main.go:141] libmachine: Using SSH client type: native
	I0903 23:39:50.146180  168525 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83a420] 0x83d120 <nil>  [] 0s} 192.168.39.63 22 <nil> <nil>}
	I0903 23:39:50.146199  168525 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-799704' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-799704/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-799704' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0903 23:39:50.255397  168525 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0903 23:39:50.255427  168525 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21341-109162/.minikube CaCertPath:/home/jenkins/minikube-integration/21341-109162/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21341-109162/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21341-109162/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21341-109162/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21341-109162/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21341-109162/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21341-109162/.minikube}
	I0903 23:39:50.255451  168525 buildroot.go:174] setting up certificates
	I0903 23:39:50.255466  168525 provision.go:84] configureAuth start
	I0903 23:39:50.255483  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) Calling .GetMachineName
	I0903 23:39:50.255836  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) Calling .GetIP
	I0903 23:39:50.259446  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | domain default-k8s-diff-port-799704 has defined MAC address 52:54:00:a0:5b:2e in network mk-default-k8s-diff-port-799704
	I0903 23:39:50.259884  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:5b:2e", ip: ""} in network mk-default-k8s-diff-port-799704: {Iface:virbr1 ExpiryTime:2025-09-04 00:39:43 +0000 UTC Type:0 Mac:52:54:00:a0:5b:2e Iaid: IPaddr:192.168.39.63 Prefix:24 Hostname:default-k8s-diff-port-799704 Clientid:01:52:54:00:a0:5b:2e}
	I0903 23:39:50.259914  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | domain default-k8s-diff-port-799704 has defined IP address 192.168.39.63 and MAC address 52:54:00:a0:5b:2e in network mk-default-k8s-diff-port-799704
	I0903 23:39:50.260088  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) Calling .GetSSHHostname
	I0903 23:39:50.262682  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | domain default-k8s-diff-port-799704 has defined MAC address 52:54:00:a0:5b:2e in network mk-default-k8s-diff-port-799704
	I0903 23:39:50.263060  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:5b:2e", ip: ""} in network mk-default-k8s-diff-port-799704: {Iface:virbr1 ExpiryTime:2025-09-04 00:39:43 +0000 UTC Type:0 Mac:52:54:00:a0:5b:2e Iaid: IPaddr:192.168.39.63 Prefix:24 Hostname:default-k8s-diff-port-799704 Clientid:01:52:54:00:a0:5b:2e}
	I0903 23:39:50.263100  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | domain default-k8s-diff-port-799704 has defined IP address 192.168.39.63 and MAC address 52:54:00:a0:5b:2e in network mk-default-k8s-diff-port-799704
	I0903 23:39:50.263203  168525 provision.go:143] copyHostCerts
	I0903 23:39:50.263281  168525 exec_runner.go:144] found /home/jenkins/minikube-integration/21341-109162/.minikube/ca.pem, removing ...
	I0903 23:39:50.263299  168525 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21341-109162/.minikube/ca.pem
	I0903 23:39:50.263354  168525 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21341-109162/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21341-109162/.minikube/ca.pem (1078 bytes)
	I0903 23:39:50.263438  168525 exec_runner.go:144] found /home/jenkins/minikube-integration/21341-109162/.minikube/cert.pem, removing ...
	I0903 23:39:50.263446  168525 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21341-109162/.minikube/cert.pem
	I0903 23:39:50.263465  168525 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21341-109162/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21341-109162/.minikube/cert.pem (1123 bytes)
	I0903 23:39:50.263519  168525 exec_runner.go:144] found /home/jenkins/minikube-integration/21341-109162/.minikube/key.pem, removing ...
	I0903 23:39:50.263526  168525 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21341-109162/.minikube/key.pem
	I0903 23:39:50.263542  168525 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21341-109162/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21341-109162/.minikube/key.pem (1675 bytes)
	I0903 23:39:50.263587  168525 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21341-109162/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21341-109162/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21341-109162/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-799704 san=[127.0.0.1 192.168.39.63 default-k8s-diff-port-799704 localhost minikube]
	I0903 23:39:50.602313  168525 provision.go:177] copyRemoteCerts
	I0903 23:39:50.602368  168525 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0903 23:39:50.602392  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) Calling .GetSSHHostname
	I0903 23:39:50.604930  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | domain default-k8s-diff-port-799704 has defined MAC address 52:54:00:a0:5b:2e in network mk-default-k8s-diff-port-799704
	I0903 23:39:50.605268  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:5b:2e", ip: ""} in network mk-default-k8s-diff-port-799704: {Iface:virbr1 ExpiryTime:2025-09-04 00:39:43 +0000 UTC Type:0 Mac:52:54:00:a0:5b:2e Iaid: IPaddr:192.168.39.63 Prefix:24 Hostname:default-k8s-diff-port-799704 Clientid:01:52:54:00:a0:5b:2e}
	I0903 23:39:50.605301  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | domain default-k8s-diff-port-799704 has defined IP address 192.168.39.63 and MAC address 52:54:00:a0:5b:2e in network mk-default-k8s-diff-port-799704
	I0903 23:39:50.605502  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) Calling .GetSSHPort
	I0903 23:39:50.605701  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) Calling .GetSSHKeyPath
	I0903 23:39:50.605883  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) Calling .GetSSHUsername
	I0903 23:39:50.606030  168525 sshutil.go:53] new ssh client: &{IP:192.168.39.63 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21341-109162/.minikube/machines/default-k8s-diff-port-799704/id_rsa Username:docker}
	I0903 23:39:50.692788  168525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0903 23:39:50.719278  168525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0903 23:39:50.746292  168525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0903 23:39:50.774559  168525 provision.go:87] duration metric: took 519.07244ms to configureAuth
	I0903 23:39:50.774589  168525 buildroot.go:189] setting minikube options for container-runtime
	I0903 23:39:50.774798  168525 config.go:182] Loaded profile config "default-k8s-diff-port-799704": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0903 23:39:50.774882  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) Calling .GetSSHHostname
	I0903 23:39:50.777459  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | domain default-k8s-diff-port-799704 has defined MAC address 52:54:00:a0:5b:2e in network mk-default-k8s-diff-port-799704
	I0903 23:39:50.777817  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:5b:2e", ip: ""} in network mk-default-k8s-diff-port-799704: {Iface:virbr1 ExpiryTime:2025-09-04 00:39:43 +0000 UTC Type:0 Mac:52:54:00:a0:5b:2e Iaid: IPaddr:192.168.39.63 Prefix:24 Hostname:default-k8s-diff-port-799704 Clientid:01:52:54:00:a0:5b:2e}
	I0903 23:39:50.777847  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | domain default-k8s-diff-port-799704 has defined IP address 192.168.39.63 and MAC address 52:54:00:a0:5b:2e in network mk-default-k8s-diff-port-799704
	I0903 23:39:50.778019  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) Calling .GetSSHPort
	I0903 23:39:50.778203  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) Calling .GetSSHKeyPath
	I0903 23:39:50.778379  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) Calling .GetSSHKeyPath
	I0903 23:39:50.778490  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) Calling .GetSSHUsername
	I0903 23:39:50.778617  168525 main.go:141] libmachine: Using SSH client type: native
	I0903 23:39:50.778835  168525 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83a420] 0x83d120 <nil>  [] 0s} 192.168.39.63 22 <nil> <nil>}
	I0903 23:39:50.778855  168525 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0903 23:39:51.011695  168525 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0903 23:39:51.011726  168525 machine.go:96] duration metric: took 1.105578172s to provisionDockerMachine
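
CRIO_MINIKUBE_OPTIONS does not configure CRI-O directly; the crio.service unit in the guest image presumably references /etc/sysconfig/crio.minikube as an EnvironmentFile and expands the variable on its ExecStart line, which is why the write is followed immediately by `systemctl restart crio`. A sketch of that wiring (the unit fragment below is an assumption about the buildroot image, not copied from it):

    [Service]
    EnvironmentFile=-/etc/sysconfig/crio.minikube
    ExecStart=/usr/bin/crio $CRIO_MINIKUBE_OPTIONS

The `--insecure-registry 10.96.0.0/12` flag marks the in-cluster service CIDR as trusted for plain-HTTP registries, so an addon registry exposed behind a ClusterIP can serve image pulls without TLS.
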
	I0903 23:39:51.011744  168525 start.go:293] postStartSetup for "default-k8s-diff-port-799704" (driver="kvm2")
	I0903 23:39:51.011757  168525 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0903 23:39:51.011779  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) Calling .DriverName
	I0903 23:39:51.012153  168525 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0903 23:39:51.012191  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) Calling .GetSSHHostname
	I0903 23:39:51.015053  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | domain default-k8s-diff-port-799704 has defined MAC address 52:54:00:a0:5b:2e in network mk-default-k8s-diff-port-799704
	I0903 23:39:51.015411  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:5b:2e", ip: ""} in network mk-default-k8s-diff-port-799704: {Iface:virbr1 ExpiryTime:2025-09-04 00:39:43 +0000 UTC Type:0 Mac:52:54:00:a0:5b:2e Iaid: IPaddr:192.168.39.63 Prefix:24 Hostname:default-k8s-diff-port-799704 Clientid:01:52:54:00:a0:5b:2e}
	I0903 23:39:51.015438  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | domain default-k8s-diff-port-799704 has defined IP address 192.168.39.63 and MAC address 52:54:00:a0:5b:2e in network mk-default-k8s-diff-port-799704
	I0903 23:39:51.015633  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) Calling .GetSSHPort
	I0903 23:39:51.015847  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) Calling .GetSSHKeyPath
	I0903 23:39:51.016003  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) Calling .GetSSHUsername
	I0903 23:39:51.016183  168525 sshutil.go:53] new ssh client: &{IP:192.168.39.63 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21341-109162/.minikube/machines/default-k8s-diff-port-799704/id_rsa Username:docker}
	I0903 23:39:51.106391  168525 ssh_runner.go:195] Run: cat /etc/os-release
	I0903 23:39:51.111268  168525 info.go:137] Remote host: Buildroot 2025.02
	I0903 23:39:51.111302  168525 filesync.go:126] Scanning /home/jenkins/minikube-integration/21341-109162/.minikube/addons for local assets ...
	I0903 23:39:51.111378  168525 filesync.go:126] Scanning /home/jenkins/minikube-integration/21341-109162/.minikube/files for local assets ...
	I0903 23:39:51.111475  168525 filesync.go:149] local asset: /home/jenkins/minikube-integration/21341-109162/.minikube/files/etc/ssl/certs/1132882.pem -> 1132882.pem in /etc/ssl/certs
	I0903 23:39:51.111606  168525 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0903 23:39:51.124981  168525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/files/etc/ssl/certs/1132882.pem --> /etc/ssl/certs/1132882.pem (1708 bytes)
	I0903 23:39:51.157053  168525 start.go:296] duration metric: took 145.28983ms for postStartSetup
	I0903 23:39:51.157106  168525 fix.go:56] duration metric: took 19.734351982s for fixHost
	I0903 23:39:51.157130  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) Calling .GetSSHHostname
	I0903 23:39:51.159836  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | domain default-k8s-diff-port-799704 has defined MAC address 52:54:00:a0:5b:2e in network mk-default-k8s-diff-port-799704
	I0903 23:39:51.160235  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:5b:2e", ip: ""} in network mk-default-k8s-diff-port-799704: {Iface:virbr1 ExpiryTime:2025-09-04 00:39:43 +0000 UTC Type:0 Mac:52:54:00:a0:5b:2e Iaid: IPaddr:192.168.39.63 Prefix:24 Hostname:default-k8s-diff-port-799704 Clientid:01:52:54:00:a0:5b:2e}
	I0903 23:39:51.160300  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | domain default-k8s-diff-port-799704 has defined IP address 192.168.39.63 and MAC address 52:54:00:a0:5b:2e in network mk-default-k8s-diff-port-799704
	I0903 23:39:51.160437  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) Calling .GetSSHPort
	I0903 23:39:51.160644  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) Calling .GetSSHKeyPath
	I0903 23:39:51.160820  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) Calling .GetSSHKeyPath
	I0903 23:39:51.161007  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) Calling .GetSSHUsername
	I0903 23:39:51.161249  168525 main.go:141] libmachine: Using SSH client type: native
	I0903 23:39:51.161542  168525 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83a420] 0x83d120 <nil>  [] 0s} 192.168.39.63 22 <nil> <nil>}
	I0903 23:39:51.161568  168525 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0903 23:39:51.267613  168525 main.go:141] libmachine: SSH cmd err, output: <nil>: 1756942791.225994565
	
	I0903 23:39:51.267649  168525 fix.go:216] guest clock: 1756942791.225994565
	I0903 23:39:51.267659  168525 fix.go:229] Guest: 2025-09-03 23:39:51.225994565 +0000 UTC Remote: 2025-09-03 23:39:51.1571123 +0000 UTC m=+19.923532049 (delta=68.882265ms)
	I0903 23:39:51.267680  168525 fix.go:200] guest clock delta is within tolerance: 68.882265ms
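
fix.go derives the guest clock from `date +%s.%N`, compares it with the host-side timestamp taken around the SSH call, and only forces a resync when the delta leaves tolerance (the 68.88ms here is accepted). A minimal sketch of that comparison, assuming the same seconds.nanoseconds format (the 1s tolerance is an assumption for the sketch, and float64 parsing rounds the guest stamp to roughly microsecond precision, which is fine at this scale):

    package main

    import (
    	"fmt"
    	"math"
    	"strconv"
    	"strings"
    	"time"
    )

    // clockDelta parses the guest's `date +%s.%N` output and returns its
    // drift relative to the host timestamp taken around the SSH call.
    func clockDelta(guestOut string, host time.Time) (time.Duration, error) {
    	secs, err := strconv.ParseFloat(strings.TrimSpace(guestOut), 64)
    	if err != nil {
    		return 0, err
    	}
    	guest := time.Unix(0, int64(secs*float64(time.Second)))
    	return guest.Sub(host), nil
    }

    func main() {
    	// values from the log above: guest ...791.225994565, host ...791.1571123
    	d, err := clockDelta("1756942791.225994565", time.Unix(1756942791, 157112300))
    	if err != nil {
    		panic(err)
    	}
    	if math.Abs(d.Seconds()) < 1.0 { // tolerance: assumed 1s for the sketch
    		fmt.Printf("guest clock delta %v is within tolerance\n", d)
    	}
    }
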
	I0903 23:39:51.267685  168525 start.go:83] releasing machines lock for "default-k8s-diff-port-799704", held for 19.844953372s
	I0903 23:39:51.267705  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) Calling .DriverName
	I0903 23:39:51.267968  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) Calling .GetIP
	I0903 23:39:51.271046  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | domain default-k8s-diff-port-799704 has defined MAC address 52:54:00:a0:5b:2e in network mk-default-k8s-diff-port-799704
	I0903 23:39:51.271416  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:5b:2e", ip: ""} in network mk-default-k8s-diff-port-799704: {Iface:virbr1 ExpiryTime:2025-09-04 00:39:43 +0000 UTC Type:0 Mac:52:54:00:a0:5b:2e Iaid: IPaddr:192.168.39.63 Prefix:24 Hostname:default-k8s-diff-port-799704 Clientid:01:52:54:00:a0:5b:2e}
	I0903 23:39:51.271440  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | domain default-k8s-diff-port-799704 has defined IP address 192.168.39.63 and MAC address 52:54:00:a0:5b:2e in network mk-default-k8s-diff-port-799704
	I0903 23:39:51.271654  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) Calling .DriverName
	I0903 23:39:51.272313  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) Calling .DriverName
	I0903 23:39:51.272572  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) Calling .DriverName
	I0903 23:39:51.272657  168525 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0903 23:39:51.272709  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) Calling .GetSSHHostname
	I0903 23:39:51.272800  168525 ssh_runner.go:195] Run: cat /version.json
	I0903 23:39:51.272831  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) Calling .GetSSHHostname
	I0903 23:39:51.275925  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | domain default-k8s-diff-port-799704 has defined MAC address 52:54:00:a0:5b:2e in network mk-default-k8s-diff-port-799704
	I0903 23:39:51.276358  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:5b:2e", ip: ""} in network mk-default-k8s-diff-port-799704: {Iface:virbr1 ExpiryTime:2025-09-04 00:39:43 +0000 UTC Type:0 Mac:52:54:00:a0:5b:2e Iaid: IPaddr:192.168.39.63 Prefix:24 Hostname:default-k8s-diff-port-799704 Clientid:01:52:54:00:a0:5b:2e}
	I0903 23:39:51.276389  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | domain default-k8s-diff-port-799704 has defined IP address 192.168.39.63 and MAC address 52:54:00:a0:5b:2e in network mk-default-k8s-diff-port-799704
	I0903 23:39:51.276409  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | domain default-k8s-diff-port-799704 has defined MAC address 52:54:00:a0:5b:2e in network mk-default-k8s-diff-port-799704
	I0903 23:39:51.276565  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) Calling .GetSSHPort
	I0903 23:39:51.276733  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) Calling .GetSSHKeyPath
	I0903 23:39:51.276885  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:5b:2e", ip: ""} in network mk-default-k8s-diff-port-799704: {Iface:virbr1 ExpiryTime:2025-09-04 00:39:43 +0000 UTC Type:0 Mac:52:54:00:a0:5b:2e Iaid: IPaddr:192.168.39.63 Prefix:24 Hostname:default-k8s-diff-port-799704 Clientid:01:52:54:00:a0:5b:2e}
	I0903 23:39:51.276908  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | domain default-k8s-diff-port-799704 has defined IP address 192.168.39.63 and MAC address 52:54:00:a0:5b:2e in network mk-default-k8s-diff-port-799704
	I0903 23:39:51.276918  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) Calling .GetSSHUsername
	I0903 23:39:51.277054  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) Calling .GetSSHPort
	I0903 23:39:51.277112  168525 sshutil.go:53] new ssh client: &{IP:192.168.39.63 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21341-109162/.minikube/machines/default-k8s-diff-port-799704/id_rsa Username:docker}
	I0903 23:39:51.277187  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) Calling .GetSSHKeyPath
	I0903 23:39:51.277335  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) Calling .GetSSHUsername
	I0903 23:39:51.277486  168525 sshutil.go:53] new ssh client: &{IP:192.168.39.63 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21341-109162/.minikube/machines/default-k8s-diff-port-799704/id_rsa Username:docker}
	I0903 23:39:51.379057  168525 ssh_runner.go:195] Run: systemctl --version
	I0903 23:39:51.384960  168525 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0903 23:39:51.529307  168525 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0903 23:39:51.537936  168525 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0903 23:39:51.538011  168525 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0903 23:39:51.558368  168525 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0903 23:39:51.558394  168525 start.go:495] detecting cgroup driver to use...
	I0903 23:39:51.558466  168525 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0903 23:39:51.578951  168525 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0903 23:39:51.596694  168525 docker.go:218] disabling cri-docker service (if available) ...
	I0903 23:39:51.596752  168525 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0903 23:39:51.613345  168525 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0903 23:39:51.627714  168525 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0903 23:39:51.771138  168525 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0903 23:39:51.904861  168525 docker.go:234] disabling docker service ...
	I0903 23:39:51.904942  168525 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0903 23:39:51.921699  168525 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0903 23:39:51.935975  168525 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0903 23:39:52.148548  168525 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0903 23:39:52.296698  168525 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0903 23:39:52.312273  168525 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0903 23:39:52.336148  168525 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0903 23:39:52.336224  168525 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0903 23:39:52.348966  168525 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0903 23:39:52.349044  168525 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0903 23:39:52.362982  168525 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0903 23:39:52.379362  168525 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0903 23:39:52.391934  168525 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0903 23:39:52.409486  168525 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0903 23:39:52.422712  168525 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0903 23:39:52.442694  168525 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
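
Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with the following effective settings (the section headers assume the stock CRI-O layout, where pause_image lives under [crio.image] and the rest under [crio.runtime]; surrounding keys in the shipped file are untouched):

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10.1"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]

The cgroupfs driver must match the kubelet's cgroupDriver or the kubelet refuses to run pods, and the unprivileged-port sysctl lets containers bind ports below 1024 without NET_BIND_SERVICE.
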
	I0903 23:39:52.454945  168525 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0903 23:39:52.465176  168525 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0903 23:39:52.465229  168525 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0903 23:39:52.484711  168525 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
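
The failed sysctl probe above just means br_netfilter was not loaded yet: /proc/sys/net/bridge/ only appears once the module is in, after which bridged pod traffic traverses iptables and kube-proxy rules can apply. The modprobe plus the ip_forward echo are the transient form of the usual persistent setup (shown for contrast; minikube does not write these files):

    # /etc/modules-load.d/k8s.conf
    br_netfilter

    # /etc/sysctl.d/k8s.conf
    net.bridge.bridge-nf-call-iptables = 1
    net.ipv4.ip_forward = 1
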
	I0903 23:39:52.497721  168525 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0903 23:39:52.656667  168525 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0903 23:39:52.772929  168525 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0903 23:39:52.773004  168525 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0903 23:39:52.778525  168525 start.go:563] Will wait 60s for crictl version
	I0903 23:39:52.778587  168525 ssh_runner.go:195] Run: which crictl
	I0903 23:39:52.782973  168525 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0903 23:39:52.831724  168525 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0903 23:39:52.831911  168525 ssh_runner.go:195] Run: crio --version
	I0903 23:39:52.862674  168525 ssh_runner.go:195] Run: crio --version
	I0903 23:39:52.892236  168525 out.go:179] * Preparing Kubernetes v1.34.0 on CRI-O 1.29.1 ...
	I0903 23:39:53.350090  161984 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0903 23:39:53.350225  161984 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0903 23:39:53.352239  161984 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0903 23:39:53.352325  161984 kubeadm.go:310] [preflight] Running pre-flight checks
	I0903 23:39:53.352429  161984 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0903 23:39:53.352559  161984 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0903 23:39:53.352700  161984 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0903 23:39:53.352785  161984 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0903 23:39:53.353884  161984 out.go:252]   - Generating certificates and keys ...
	I0903 23:39:53.354002  161984 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0903 23:39:53.354096  161984 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0903 23:39:53.354204  161984 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0903 23:39:53.354294  161984 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0903 23:39:53.354408  161984 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0903 23:39:53.354488  161984 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0903 23:39:53.354571  161984 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0903 23:39:53.354691  161984 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0903 23:39:53.354803  161984 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0903 23:39:53.354908  161984 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0903 23:39:53.354963  161984 kubeadm.go:310] [certs] Using the existing "sa" key
	I0903 23:39:53.355043  161984 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0903 23:39:53.355116  161984 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0903 23:39:53.355189  161984 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0903 23:39:53.355279  161984 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0903 23:39:53.355378  161984 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0903 23:39:53.355503  161984 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0903 23:39:53.355595  161984 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0903 23:39:53.355639  161984 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0903 23:39:53.355708  161984 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0903 23:39:53.356804  161984 out.go:252]   - Booting up control plane ...
	I0903 23:39:53.356945  161984 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0903 23:39:53.357090  161984 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0903 23:39:53.357200  161984 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0903 23:39:53.357322  161984 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0903 23:39:53.357557  161984 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0903 23:39:53.357628  161984 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0903 23:39:53.357717  161984 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0903 23:39:53.357955  161984 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0903 23:39:53.358039  161984 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0903 23:39:53.358267  161984 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0903 23:39:53.358357  161984 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0903 23:39:53.358607  161984 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0903 23:39:53.358690  161984 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0903 23:39:53.358948  161984 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0903 23:39:53.359032  161984 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0903 23:39:53.359346  161984 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0903 23:39:53.359365  161984 kubeadm.go:310] 
	I0903 23:39:53.359417  161984 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0903 23:39:53.359470  161984 kubeadm.go:310] 		timed out waiting for the condition
	I0903 23:39:53.359476  161984 kubeadm.go:310] 
	I0903 23:39:53.359539  161984 kubeadm.go:310] 	This error is likely caused by:
	I0903 23:39:53.359578  161984 kubeadm.go:310] 		- The kubelet is not running
	I0903 23:39:53.359718  161984 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0903 23:39:53.359727  161984 kubeadm.go:310] 
	I0903 23:39:53.359871  161984 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0903 23:39:53.359916  161984 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0903 23:39:53.359961  161984 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0903 23:39:53.359968  161984 kubeadm.go:310] 
	I0903 23:39:53.360175  161984 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0903 23:39:53.360307  161984 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0903 23:39:53.360316  161984 kubeadm.go:310] 
	I0903 23:39:53.360461  161984 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0903 23:39:53.360565  161984 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0903 23:39:53.360667  161984 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0903 23:39:53.360764  161984 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0903 23:39:53.360841  161984 kubeadm.go:394] duration metric: took 3m57.809707974s to StartCluster
	I0903 23:39:53.360890  161984 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:39:53.360954  161984 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:39:53.361022  161984 kubeadm.go:310] 
	I0903 23:39:53.423382  161984 cri.go:89] found id: ""
	I0903 23:39:53.423411  161984 logs.go:282] 0 containers: []
	W0903 23:39:53.423422  161984 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:39:53.423430  161984 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:39:53.423488  161984 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:39:53.479608  161984 cri.go:89] found id: ""
	I0903 23:39:53.479645  161984 logs.go:282] 0 containers: []
	W0903 23:39:53.479659  161984 logs.go:284] No container was found matching "etcd"
	I0903 23:39:53.479667  161984 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:39:53.479736  161984 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:39:53.528071  161984 cri.go:89] found id: ""
	I0903 23:39:53.528107  161984 logs.go:282] 0 containers: []
	W0903 23:39:53.528121  161984 logs.go:284] No container was found matching "coredns"
	I0903 23:39:53.528131  161984 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:39:53.528202  161984 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:39:53.573292  161984 cri.go:89] found id: ""
	I0903 23:39:53.573335  161984 logs.go:282] 0 containers: []
	W0903 23:39:53.573348  161984 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:39:53.573361  161984 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:39:53.573461  161984 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:39:53.620296  161984 cri.go:89] found id: ""
	I0903 23:39:53.620326  161984 logs.go:282] 0 containers: []
	W0903 23:39:53.620334  161984 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:39:53.620340  161984 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:39:53.620395  161984 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:39:53.671465  161984 cri.go:89] found id: ""
	I0903 23:39:53.671500  161984 logs.go:282] 0 containers: []
	W0903 23:39:53.671512  161984 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:39:53.671521  161984 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:39:53.671600  161984 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:39:53.726259  161984 cri.go:89] found id: ""
	I0903 23:39:53.726297  161984 logs.go:282] 0 containers: []
	W0903 23:39:53.726320  161984 logs.go:284] No container was found matching "kindnet"
	I0903 23:39:53.726335  161984 logs.go:123] Gathering logs for kubelet ...
	I0903 23:39:53.726350  161984 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0903 23:39:53.803144  161984 logs.go:123] Gathering logs for dmesg ...
	I0903 23:39:53.803236  161984 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0903 23:39:53.825585  161984 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:39:53.825628  161984 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:39:53.938313  161984 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0903 23:39:53.938350  161984 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:39:53.938368  161984 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0903 23:39:54.079732  161984 logs.go:123] Gathering logs for container status ...
	I0903 23:39:54.079785  161984 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0903 23:39:54.144894  161984 out.go:434] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
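For reference, the recovery steps kubeadm recommends above reduce to the following sequence (a sketch assembled from the commands this log itself prints; CONTAINERID stands for whichever ID the ps step reports):

	# Check whether the kubelet service is running and why it may have exited
	systemctl status kubelet
	journalctl -xeu kubelet
	# List the Kubernetes containers CRI-O knows about, then read a failing one's logs
	crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID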
	W0903 23:39:54.144973  161984 out.go:285] * 
	W0903 23:39:54.145064  161984 out.go:285] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0903 23:39:54.145083  161984 out.go:285] * 
	W0903 23:39:54.147493  161984 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0903 23:39:54.150778  161984 out.go:203] 
	W0903 23:39:54.151952  161984 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0903 23:39:54.152049  161984 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0903 23:39:54.152109  161984 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0903 23:39:54.153719  161984 out.go:203] 
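The W-level suggestion above translates directly into a retry with the kubelet's cgroup driver pinned to systemd. A sketch, using the profile name from this run and the exact flag minikube prints:

	minikube start -p old-k8s-version-335468 --extra-config=kubelet.cgroup-driver=systemd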
	W0903 23:39:51.760171  168184 pod_ready.go:104] pod "etcd-embed-certs-088493" is not "Ready", error: <nil>
	W0903 23:39:53.762362  168184 pod_ready.go:104] pod "etcd-embed-certs-088493" is not "Ready", error: <nil>
	I0903 23:39:54.769147  168184 pod_ready.go:94] pod "etcd-embed-certs-088493" is "Ready"
	I0903 23:39:54.769179  168184 pod_ready.go:86] duration metric: took 5.015343926s for pod "etcd-embed-certs-088493" in "kube-system" namespace to be "Ready" or be gone ...
	I0903 23:39:54.776166  168184 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-088493" in "kube-system" namespace to be "Ready" or be gone ...
	I0903 23:39:54.799217  168184 pod_ready.go:94] pod "kube-apiserver-embed-certs-088493" is "Ready"
	I0903 23:39:54.799245  168184 pod_ready.go:86] duration metric: took 23.053755ms for pod "kube-apiserver-embed-certs-088493" in "kube-system" namespace to be "Ready" or be gone ...
	I0903 23:39:54.810330  168184 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-088493" in "kube-system" namespace to be "Ready" or be gone ...
	I0903 23:39:54.825639  168184 pod_ready.go:94] pod "kube-controller-manager-embed-certs-088493" is "Ready"
	I0903 23:39:54.825672  168184 pod_ready.go:86] duration metric: took 15.305332ms for pod "kube-controller-manager-embed-certs-088493" in "kube-system" namespace to be "Ready" or be gone ...
	I0903 23:39:54.829341  168184 pod_ready.go:83] waiting for pod "kube-proxy-pgtpd" in "kube-system" namespace to be "Ready" or be gone ...
	I0903 23:39:54.961525  168184 pod_ready.go:94] pod "kube-proxy-pgtpd" is "Ready"
	I0903 23:39:54.961566  168184 pod_ready.go:86] duration metric: took 132.190496ms for pod "kube-proxy-pgtpd" in "kube-system" namespace to be "Ready" or be gone ...
	I0903 23:39:55.159939  168184 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-088493" in "kube-system" namespace to be "Ready" or be gone ...
	I0903 23:39:55.567016  168184 pod_ready.go:94] pod "kube-scheduler-embed-certs-088493" is "Ready"
	I0903 23:39:55.567049  168184 pod_ready.go:86] duration metric: took 407.078157ms for pod "kube-scheduler-embed-certs-088493" in "kube-system" namespace to be "Ready" or be gone ...
	I0903 23:39:55.567065  168184 pod_ready.go:40] duration metric: took 5.831655811s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0903 23:39:55.649021  168184 start.go:617] kubectl: 1.33.2, cluster: 1.34.0 (minor skew: 1)
	I0903 23:39:55.650690  168184 out.go:179] * Done! kubectl is now configured to use "embed-certs-088493" cluster and "default" namespace by default
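The "minor skew: 1" note two lines up is inside kubectl's documented support window (a client is supported against servers within one minor version). Confirming both sides, as a sketch:

	kubectl version    # prints Client Version and Server Version together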
	I0903 23:39:52.893451  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) Calling .GetIP
	I0903 23:39:52.896582  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | domain default-k8s-diff-port-799704 has defined MAC address 52:54:00:a0:5b:2e in network mk-default-k8s-diff-port-799704
	I0903 23:39:52.896963  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:5b:2e", ip: ""} in network mk-default-k8s-diff-port-799704: {Iface:virbr1 ExpiryTime:2025-09-04 00:39:43 +0000 UTC Type:0 Mac:52:54:00:a0:5b:2e Iaid: IPaddr:192.168.39.63 Prefix:24 Hostname:default-k8s-diff-port-799704 Clientid:01:52:54:00:a0:5b:2e}
	I0903 23:39:52.896985  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | domain default-k8s-diff-port-799704 has defined IP address 192.168.39.63 and MAC address 52:54:00:a0:5b:2e in network mk-default-k8s-diff-port-799704
	I0903 23:39:52.897290  168525 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0903 23:39:52.901553  168525 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
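Expanded for readability, that /etc/hosts refresh does three things (a sketch of the same one-liner; the IP and hostname are the values in the log): strip any stale host.minikube.internal entry, append the current mapping, and copy the result back over /etc/hosts via a temp file:

	{ grep -v $'\thost.minikube.internal$' /etc/hosts; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts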
	I0903 23:39:52.915968  168525 kubeadm.go:875] updating cluster {Name:default-k8s-diff-port-799704 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21147/minikube-v1.36.0-1753487480-21147-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:default-k8s-diff-port-799704 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.63 Port:8444 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0903 23:39:52.916109  168525 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0903 23:39:52.916174  168525 ssh_runner.go:195] Run: sudo crictl images --output json
	I0903 23:39:52.950990  168525 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.0". assuming images are not preloaded.
	I0903 23:39:52.951058  168525 ssh_runner.go:195] Run: which lz4
	I0903 23:39:52.955024  168525 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0903 23:39:52.959339  168525 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0903 23:39:52.959365  168525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (409455026 bytes)
	I0903 23:39:54.470294  168525 crio.go:462] duration metric: took 1.515293199s to copy over tarball
	I0903 23:39:54.470383  168525 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
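Condensed, the preload path above is a three-step dance (a sketch of the commands the log runs on the guest): probe for the tarball, copy the cached one over when the probe fails, then unpack it into /var:

	stat -c "%s %y" /preloaded.tar.lz4    # exit status 1 means it is absent
	# (minikube then scps the cached preloaded-images tarball to /preloaded.tar.lz4)
	sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4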
	
	
	==> CRI-O <==
	Sep 03 23:39:56 old-k8s-version-335468 crio[824]: time="2025-09-03 23:39:56.975410014Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1756942796975359976,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=612d41c6-f4ce-4d9d-911e-bf50c758668b name=/runtime.v1.ImageService/ImageFsInfo
	Sep 03 23:39:56 old-k8s-version-335468 crio[824]: time="2025-09-03 23:39:56.976281940Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=cd8a4911-7d79-4a41-a556-1b7a2c39d10f name=/runtime.v1.RuntimeService/ListContainers
	Sep 03 23:39:56 old-k8s-version-335468 crio[824]: time="2025-09-03 23:39:56.976365876Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=cd8a4911-7d79-4a41-a556-1b7a2c39d10f name=/runtime.v1.RuntimeService/ListContainers
	Sep 03 23:39:56 old-k8s-version-335468 crio[824]: time="2025-09-03 23:39:56.976427105Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=cd8a4911-7d79-4a41-a556-1b7a2c39d10f name=/runtime.v1.RuntimeService/ListContainers
	Sep 03 23:39:57 old-k8s-version-335468 crio[824]: time="2025-09-03 23:39:57.017289612Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=fe528548-0bfd-4bd4-96bd-1c62380282e7 name=/runtime.v1.RuntimeService/Version
	Sep 03 23:39:57 old-k8s-version-335468 crio[824]: time="2025-09-03 23:39:57.017407246Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=fe528548-0bfd-4bd4-96bd-1c62380282e7 name=/runtime.v1.RuntimeService/Version
	Sep 03 23:39:57 old-k8s-version-335468 crio[824]: time="2025-09-03 23:39:57.018853013Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=394db456-598b-4eb5-b035-d57cea52c544 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 03 23:39:57 old-k8s-version-335468 crio[824]: time="2025-09-03 23:39:57.019439695Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1756942797019409816,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=394db456-598b-4eb5-b035-d57cea52c544 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 03 23:39:57 old-k8s-version-335468 crio[824]: time="2025-09-03 23:39:57.020076916Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8ac2914c-49e7-4497-b9af-b1b99fd5c723 name=/runtime.v1.RuntimeService/ListContainers
	Sep 03 23:39:57 old-k8s-version-335468 crio[824]: time="2025-09-03 23:39:57.020161910Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8ac2914c-49e7-4497-b9af-b1b99fd5c723 name=/runtime.v1.RuntimeService/ListContainers
	Sep 03 23:39:57 old-k8s-version-335468 crio[824]: time="2025-09-03 23:39:57.020211322Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=8ac2914c-49e7-4497-b9af-b1b99fd5c723 name=/runtime.v1.RuntimeService/ListContainers
	Sep 03 23:39:57 old-k8s-version-335468 crio[824]: time="2025-09-03 23:39:57.066099161Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ddcc1307-35bf-4f32-95a4-b55f0c0e2460 name=/runtime.v1.RuntimeService/Version
	Sep 03 23:39:57 old-k8s-version-335468 crio[824]: time="2025-09-03 23:39:57.066303391Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ddcc1307-35bf-4f32-95a4-b55f0c0e2460 name=/runtime.v1.RuntimeService/Version
	Sep 03 23:39:57 old-k8s-version-335468 crio[824]: time="2025-09-03 23:39:57.067541022Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f2865bd7-4794-437e-81ef-a0cab81ddfa2 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 03 23:39:57 old-k8s-version-335468 crio[824]: time="2025-09-03 23:39:57.068152623Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1756942797068125230,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f2865bd7-4794-437e-81ef-a0cab81ddfa2 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 03 23:39:57 old-k8s-version-335468 crio[824]: time="2025-09-03 23:39:57.068833893Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=82ebedfd-cac5-4d56-8785-2db2fb0b8299 name=/runtime.v1.RuntimeService/ListContainers
	Sep 03 23:39:57 old-k8s-version-335468 crio[824]: time="2025-09-03 23:39:57.068881343Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=82ebedfd-cac5-4d56-8785-2db2fb0b8299 name=/runtime.v1.RuntimeService/ListContainers
	Sep 03 23:39:57 old-k8s-version-335468 crio[824]: time="2025-09-03 23:39:57.068912902Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=82ebedfd-cac5-4d56-8785-2db2fb0b8299 name=/runtime.v1.RuntimeService/ListContainers
	Sep 03 23:39:57 old-k8s-version-335468 crio[824]: time="2025-09-03 23:39:57.105699721Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=94f80279-7335-4c2d-92ce-0262fbddd59f name=/runtime.v1.RuntimeService/Version
	Sep 03 23:39:57 old-k8s-version-335468 crio[824]: time="2025-09-03 23:39:57.105857429Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=94f80279-7335-4c2d-92ce-0262fbddd59f name=/runtime.v1.RuntimeService/Version
	Sep 03 23:39:57 old-k8s-version-335468 crio[824]: time="2025-09-03 23:39:57.107167958Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=fa20cb18-7eb4-4935-a650-8e65078606ee name=/runtime.v1.ImageService/ImageFsInfo
	Sep 03 23:39:57 old-k8s-version-335468 crio[824]: time="2025-09-03 23:39:57.107802573Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1756942797107724895,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=fa20cb18-7eb4-4935-a650-8e65078606ee name=/runtime.v1.ImageService/ImageFsInfo
	Sep 03 23:39:57 old-k8s-version-335468 crio[824]: time="2025-09-03 23:39:57.108744614Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1a2633ba-a854-4297-a3b2-93e61dc59bd4 name=/runtime.v1.RuntimeService/ListContainers
	Sep 03 23:39:57 old-k8s-version-335468 crio[824]: time="2025-09-03 23:39:57.108951464Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1a2633ba-a854-4297-a3b2-93e61dc59bd4 name=/runtime.v1.RuntimeService/ListContainers
	Sep 03 23:39:57 old-k8s-version-335468 crio[824]: time="2025-09-03 23:39:57.109030272Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=1a2633ba-a854-4297-a3b2-93e61dc59bd4 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
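When kubectl is refused on localhost:8443 as above, asking the container runtime directly shows whether an apiserver container ever came up (a sketch; here it would match the empty container list in the previous section):

	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube-apiserver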
	
	
	==> dmesg <==
	[Sep 3 23:35] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000010] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.000046] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.017584] (rpcbind)[118]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +1.215007] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000022] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.089265] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.110682] kauditd_printk_skb: 46 callbacks suppressed
	[  +0.144101] kauditd_printk_skb: 18 callbacks suppressed
	[Sep 3 23:36] kauditd_printk_skb: 18 callbacks suppressed
	
	
	==> kernel <==
	 23:39:57 up 4 min,  0 users,  load average: 0.02, 0.11, 0.06
	Linux old-k8s-version-335468 6.6.95 #1 SMP PREEMPT_DYNAMIC Sat Jul 26 03:21:57 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kubelet <==
	Sep 03 23:39:54 old-k8s-version-335468 kubelet[1943]: goroutine 151 [select]:
	Sep 03 23:39:54 old-k8s-version-335468 kubelet[1943]: net.(*netFD).connect.func2(0x4f7fe40, 0xc000c77ec0, 0xc000cbe580, 0xc000ddee40, 0xc000ddede0)
	Sep 03 23:39:54 old-k8s-version-335468 kubelet[1943]:         /usr/local/go/src/net/fd_unix.go:118 +0xc5
	Sep 03 23:39:54 old-k8s-version-335468 kubelet[1943]: created by net.(*netFD).connect
	Sep 03 23:39:54 old-k8s-version-335468 kubelet[1943]:         /usr/local/go/src/net/fd_unix.go:117 +0x234
	Sep 03 23:39:54 old-k8s-version-335468 kubelet[1943]: goroutine 152 [syscall]:
	Sep 03 23:39:54 old-k8s-version-335468 kubelet[1943]: syscall.Syscall6(0xe8, 0xe, 0xc000a8fb6c, 0x7, 0xffffffffffffffff, 0x0, 0x0, 0x0, 0x0, 0x0)
	Sep 03 23:39:54 old-k8s-version-335468 kubelet[1943]:         /usr/local/go/src/syscall/asm_linux_amd64.s:41 +0x5
	Sep 03 23:39:54 old-k8s-version-335468 kubelet[1943]: k8s.io/kubernetes/vendor/golang.org/x/sys/unix.EpollWait(0xe, 0xc000a8fb6c, 0x7, 0x7, 0xffffffffffffffff, 0x0, 0x0, 0x0)
	Sep 03 23:39:54 old-k8s-version-335468 kubelet[1943]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/golang.org/x/sys/unix/zsyscall_linux_amd64.go:76 +0x72
	Sep 03 23:39:54 old-k8s-version-335468 kubelet[1943]: k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify.(*fdPoller).wait(0xc000ce8360, 0x0, 0x0, 0x0)
	Sep 03 23:39:54 old-k8s-version-335468 kubelet[1943]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify/inotify_poller.go:86 +0x91
	Sep 03 23:39:54 old-k8s-version-335468 kubelet[1943]: k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify.(*Watcher).readEvents(0xc000c1b130)
	Sep 03 23:39:54 old-k8s-version-335468 kubelet[1943]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify/inotify.go:192 +0x206
	Sep 03 23:39:54 old-k8s-version-335468 kubelet[1943]: created by k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify.NewWatcher
	Sep 03 23:39:54 old-k8s-version-335468 kubelet[1943]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify/inotify.go:59 +0x1a8
	Sep 03 23:39:54 old-k8s-version-335468 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Sep 03 23:39:54 old-k8s-version-335468 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Sep 03 23:39:55 old-k8s-version-335468 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 20.
	Sep 03 23:39:55 old-k8s-version-335468 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Sep 03 23:39:55 old-k8s-version-335468 kubelet[2054]: I0903 23:39:55.262199    2054 server.go:416] Version: v1.20.0
	Sep 03 23:39:55 old-k8s-version-335468 kubelet[2054]: I0903 23:39:55.262843    2054 server.go:837] Client rotation is on, will bootstrap in background
	Sep 03 23:39:55 old-k8s-version-335468 kubelet[2054]: I0903 23:39:55.266238    2054 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Sep 03 23:39:55 old-k8s-version-335468 kubelet[2054]: W0903 23:39:55.268135    2054 manager.go:159] Cannot detect current cgroup on cgroup v2
	Sep 03 23:39:55 old-k8s-version-335468 kubelet[2054]: I0903 23:39:55.268587    2054 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-335468 -n old-k8s-version-335468
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-335468 -n old-k8s-version-335468: exit status 6 (313.154588ms)

                                                
                                                
-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0903 23:39:57.680553  169043 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-335468" does not appear in /home/jenkins/minikube-integration/21341-109162/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "old-k8s-version-335468" apiserver is not running, skipping kubectl commands (state="Stopped")
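The stale-context warning in that output names the fix itself; repointing the kubeconfig entry for this profile would look like (a sketch):

	minikube update-context -p old-k8s-version-335468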
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-335468 -n old-k8s-version-335468
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-335468 -n old-k8s-version-335468: exit status 6 (287.741007ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0903 23:39:57.992135  169089 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-335468" does not appear in /home/jenkins/minikube-integration/21341-109162/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:247: status error: exit status 6 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-335468 logs -n 25
E0903 23:39:59.704893  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/kindnet-380966/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0903 23:39:59.835469  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/custom-flannel-380966/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-335468 logs -n 25: (2.418635049s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                  ARGS                                                                                  │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p embed-certs-088493 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.0                   │ embed-certs-088493           │ jenkins │ v1.36.0 │ 03 Sep 25 23:36 UTC │ 03 Sep 25 23:37 UTC │
	│ ssh     │ -p enable-default-cni-380966 sudo systemctl status containerd --all --full --no-pager                                                                                  │ enable-default-cni-380966    │ jenkins │ v1.36.0 │ 03 Sep 25 23:36 UTC │                     │
	│ ssh     │ -p enable-default-cni-380966 sudo systemctl cat containerd --no-pager                                                                                                  │ enable-default-cni-380966    │ jenkins │ v1.36.0 │ 03 Sep 25 23:36 UTC │ 03 Sep 25 23:36 UTC │
	│ ssh     │ -p enable-default-cni-380966 sudo cat /lib/systemd/system/containerd.service                                                                                           │ enable-default-cni-380966    │ jenkins │ v1.36.0 │ 03 Sep 25 23:36 UTC │ 03 Sep 25 23:36 UTC │
	│ ssh     │ -p enable-default-cni-380966 sudo cat /etc/containerd/config.toml                                                                                                      │ enable-default-cni-380966    │ jenkins │ v1.36.0 │ 03 Sep 25 23:36 UTC │ 03 Sep 25 23:36 UTC │
	│ ssh     │ -p enable-default-cni-380966 sudo containerd config dump                                                                                                               │ enable-default-cni-380966    │ jenkins │ v1.36.0 │ 03 Sep 25 23:36 UTC │ 03 Sep 25 23:36 UTC │
	│ ssh     │ -p enable-default-cni-380966 sudo systemctl status crio --all --full --no-pager                                                                                        │ enable-default-cni-380966    │ jenkins │ v1.36.0 │ 03 Sep 25 23:36 UTC │ 03 Sep 25 23:36 UTC │
	│ ssh     │ -p enable-default-cni-380966 sudo systemctl cat crio --no-pager                                                                                                        │ enable-default-cni-380966    │ jenkins │ v1.36.0 │ 03 Sep 25 23:36 UTC │ 03 Sep 25 23:36 UTC │
	│ ssh     │ -p enable-default-cni-380966 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                              │ enable-default-cni-380966    │ jenkins │ v1.36.0 │ 03 Sep 25 23:36 UTC │ 03 Sep 25 23:36 UTC │
	│ ssh     │ -p enable-default-cni-380966 sudo crio config                                                                                                                          │ enable-default-cni-380966    │ jenkins │ v1.36.0 │ 03 Sep 25 23:36 UTC │ 03 Sep 25 23:36 UTC │
	│ delete  │ -p enable-default-cni-380966                                                                                                                                           │ enable-default-cni-380966    │ jenkins │ v1.36.0 │ 03 Sep 25 23:36 UTC │ 03 Sep 25 23:36 UTC │
	│ delete  │ -p disable-driver-mounts-005091                                                                                                                                        │ disable-driver-mounts-005091 │ jenkins │ v1.36.0 │ 03 Sep 25 23:36 UTC │ 03 Sep 25 23:36 UTC │
	│ start   │ -p default-k8s-diff-port-799704 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.0 │ default-k8s-diff-port-799704 │ jenkins │ v1.36.0 │ 03 Sep 25 23:36 UTC │ 03 Sep 25 23:37 UTC │
	│ addons  │ enable metrics-server -p no-preload-434043 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                │ no-preload-434043            │ jenkins │ v1.36.0 │ 03 Sep 25 23:37 UTC │ 03 Sep 25 23:37 UTC │
	│ stop    │ -p no-preload-434043 --alsologtostderr -v=3                                                                                                                            │ no-preload-434043            │ jenkins │ v1.36.0 │ 03 Sep 25 23:37 UTC │ 03 Sep 25 23:38 UTC │
	│ addons  │ enable metrics-server -p embed-certs-088493 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                               │ embed-certs-088493           │ jenkins │ v1.36.0 │ 03 Sep 25 23:37 UTC │ 03 Sep 25 23:37 UTC │
	│ stop    │ -p embed-certs-088493 --alsologtostderr -v=3                                                                                                                           │ embed-certs-088493           │ jenkins │ v1.36.0 │ 03 Sep 25 23:37 UTC │ 03 Sep 25 23:39 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-799704 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                     │ default-k8s-diff-port-799704 │ jenkins │ v1.36.0 │ 03 Sep 25 23:37 UTC │ 03 Sep 25 23:37 UTC │
	│ stop    │ -p default-k8s-diff-port-799704 --alsologtostderr -v=3                                                                                                                 │ default-k8s-diff-port-799704 │ jenkins │ v1.36.0 │ 03 Sep 25 23:37 UTC │ 03 Sep 25 23:39 UTC │
	│ addons  │ enable dashboard -p no-preload-434043 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                           │ no-preload-434043            │ jenkins │ v1.36.0 │ 03 Sep 25 23:38 UTC │ 03 Sep 25 23:38 UTC │
	│ start   │ -p no-preload-434043 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.0                  │ no-preload-434043            │ jenkins │ v1.36.0 │ 03 Sep 25 23:38 UTC │ 03 Sep 25 23:39 UTC │
	│ addons  │ enable dashboard -p embed-certs-088493 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                          │ embed-certs-088493           │ jenkins │ v1.36.0 │ 03 Sep 25 23:39 UTC │ 03 Sep 25 23:39 UTC │
	│ start   │ -p embed-certs-088493 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.0                   │ embed-certs-088493           │ jenkins │ v1.36.0 │ 03 Sep 25 23:39 UTC │ 03 Sep 25 23:39 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-799704 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                │ default-k8s-diff-port-799704 │ jenkins │ v1.36.0 │ 03 Sep 25 23:39 UTC │ 03 Sep 25 23:39 UTC │
	│ start   │ -p default-k8s-diff-port-799704 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.0 │ default-k8s-diff-port-799704 │ jenkins │ v1.36.0 │ 03 Sep 25 23:39 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/03 23:39:31
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
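Read against that format, the first entry below decodes as: severity I (info), date 0903, time 23:39:31.271818, thread id 168525, source out.go:360, then the message text.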
	I0903 23:39:31.271818  168525 out.go:360] Setting OutFile to fd 1 ...
	I0903 23:39:31.272050  168525 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0903 23:39:31.272058  168525 out.go:374] Setting ErrFile to fd 2...
	I0903 23:39:31.272062  168525 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0903 23:39:31.272279  168525 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21341-109162/.minikube/bin
	I0903 23:39:31.272813  168525 out.go:368] Setting JSON to false
	I0903 23:39:31.273874  168525 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":8515,"bootTime":1756934256,"procs":219,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0903 23:39:31.273940  168525 start.go:140] virtualization: kvm guest
	I0903 23:39:31.275828  168525 out.go:179] * [default-k8s-diff-port-799704] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	I0903 23:39:31.277406  168525 notify.go:220] Checking for updates...
	I0903 23:39:31.278829  168525 out.go:179]   - MINIKUBE_LOCATION=21341
	I0903 23:39:31.280177  168525 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0903 23:39:31.281537  168525 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21341-109162/kubeconfig
	I0903 23:39:31.282646  168525 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21341-109162/.minikube
	I0903 23:39:31.283774  168525 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0903 23:39:31.284974  168525 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0903 23:39:31.286724  168525 config.go:182] Loaded profile config "default-k8s-diff-port-799704": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0903 23:39:31.287351  168525 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21341-109162/.minikube/bin/docker-machine-driver-kvm2
	I0903 23:39:31.287440  168525 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0903 23:39:31.308970  168525 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45085
	I0903 23:39:31.309860  168525 main.go:141] libmachine: () Calling .GetVersion
	I0903 23:39:31.310730  168525 main.go:141] libmachine: Using API Version  1
	I0903 23:39:31.310751  168525 main.go:141] libmachine: () Calling .SetConfigRaw
	I0903 23:39:31.311414  168525 main.go:141] libmachine: () Calling .GetMachineName
	I0903 23:39:31.311676  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) Calling .DriverName
	I0903 23:39:31.311969  168525 driver.go:421] Setting default libvirt URI to qemu:///system
	I0903 23:39:31.312450  168525 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21341-109162/.minikube/bin/docker-machine-driver-kvm2
	I0903 23:39:31.312503  168525 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0903 23:39:31.333553  168525 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40457
	I0903 23:39:31.334226  168525 main.go:141] libmachine: () Calling .GetVersion
	I0903 23:39:31.334781  168525 main.go:141] libmachine: Using API Version  1
	I0903 23:39:31.334799  168525 main.go:141] libmachine: () Calling .SetConfigRaw
	I0903 23:39:31.335144  168525 main.go:141] libmachine: () Calling .GetMachineName
	I0903 23:39:31.335265  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) Calling .DriverName
	I0903 23:39:31.388196  168525 out.go:179] * Using the kvm2 driver based on existing profile
	I0903 23:39:31.389355  168525 start.go:304] selected driver: kvm2
	I0903 23:39:31.389381  168525 start.go:918] validating driver "kvm2" against &{Name:default-k8s-diff-port-799704 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21147/minikube-v1.36.0-1753487480-21147-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:default-k8s-diff-port-799704 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.63 Port:8444 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0903 23:39:31.389764  168525 start.go:929] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0903 23:39:31.391092  168525 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0903 23:39:31.391304  168525 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21341-109162/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0903 23:39:31.418651  168525 install.go:137] /home/jenkins/minikube-integration/21341-109162/.minikube/bin/docker-machine-driver-kvm2 version is 1.36.0
	I0903 23:39:31.419224  168525 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0903 23:39:31.419280  168525 cni.go:84] Creating CNI manager for ""
	I0903 23:39:31.419338  168525 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
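minikube picked the bridge CNI above automatically for the kvm2 driver with the crio runtime; the same choice can be made explicit with minikube's --cni flag, e.g. (a sketch):

	minikube start -p default-k8s-diff-port-799704 --cni=bridge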
	I0903 23:39:31.419383  168525 start.go:348] cluster config:
	{Name:default-k8s-diff-port-799704 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21147/minikube-v1.36.0-1753487480-21147-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:default-k8s-diff-port-799704 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.63 Port:8444 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0903 23:39:31.419512  168525 iso.go:125] acquiring lock: {Name:mk1032fbced3c9e76ba5a04480289e9f07d0eb9c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0903 23:39:31.421091  168525 out.go:179] * Starting "default-k8s-diff-port-799704" primary control-plane node in "default-k8s-diff-port-799704" cluster
	I0903 23:39:31.422103  168525 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0903 23:39:31.422147  168525 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21341-109162/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4
	I0903 23:39:31.422156  168525 cache.go:58] Caching tarball of preloaded images
	I0903 23:39:31.422278  168525 preload.go:172] Found /home/jenkins/minikube-integration/21341-109162/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0903 23:39:31.422293  168525 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
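The preload lines above follow a simple cache-or-download pattern: if the tarball already exists on disk, the download is skipped and only its presence is verified. A minimal Go sketch of that idea (CachedOrDownload is a hypothetical helper, not minikube's actual API):

	package preload

	import (
		"io"
		"net/http"
		"os"
		"path/filepath"
	)

	// CachedOrDownload returns the path of a preload tarball, fetching it
	// only when it is not already present in the local cache directory,
	// the check-then-skip the preload.go lines above record.
	func CachedOrDownload(cacheDir, name, url string) (string, error) {
		path := filepath.Join(cacheDir, name)
		if _, err := os.Stat(path); err == nil {
			return path, nil // found local preload, skipping download
		}
		resp, err := http.Get(url)
		if err != nil {
			return "", err
		}
		defer resp.Body.Close()
		f, err := os.Create(path)
		if err != nil {
			return "", err
		}
		defer f.Close()
		if _, err := io.Copy(f, resp.Body); err != nil {
			return "", err
		}
		return path, nil
	}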
	I0903 23:39:31.422425  168525 profile.go:143] Saving config to /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/default-k8s-diff-port-799704/config.json ...
	I0903 23:39:31.422671  168525 start.go:360] acquireMachinesLock for default-k8s-diff-port-799704: {Name:mkcbe368d68a51a2a3c0eadc653c4df7d9736b4b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0903 23:39:31.422720  168525 start.go:364] duration metric: took 26.407µs to acquireMachinesLock for "default-k8s-diff-port-799704"
	I0903 23:39:31.422741  168525 start.go:96] Skipping create...Using existing machine configuration
	I0903 23:39:31.422748  168525 fix.go:54] fixHost starting: 
	I0903 23:39:31.423078  168525 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21341-109162/.minikube/bin/docker-machine-driver-kvm2
	I0903 23:39:31.423117  168525 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0903 23:39:31.441527  168525 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35531
	I0903 23:39:31.442203  168525 main.go:141] libmachine: () Calling .GetVersion
	I0903 23:39:31.442786  168525 main.go:141] libmachine: Using API Version  1
	I0903 23:39:31.442812  168525 main.go:141] libmachine: () Calling .SetConfigRaw
	I0903 23:39:31.443215  168525 main.go:141] libmachine: () Calling .GetMachineName
	I0903 23:39:31.443398  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) Calling .DriverName
	I0903 23:39:31.443541  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) Calling .GetState
	I0903 23:39:31.445456  168525 fix.go:112] recreateIfNeeded on default-k8s-diff-port-799704: state=Stopped err=<nil>
	I0903 23:39:31.445508  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) Calling .DriverName
	W0903 23:39:31.449565  168525 fix.go:138] unexpected machine state, will restart: <nil>
	I0903 23:39:30.924315  167951 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0903 23:39:30.924344  167951 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0903 23:39:30.924364  167951 main.go:141] libmachine: (no-preload-434043) Calling .GetSSHHostname
	I0903 23:39:30.925334  167951 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0903 23:39:30.925362  167951 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0903 23:39:30.925405  167951 main.go:141] libmachine: (no-preload-434043) Calling .GetSSHHostname
	I0903 23:39:30.928751  167951 main.go:141] libmachine: (no-preload-434043) DBG | domain no-preload-434043 has defined MAC address 52:54:00:18:6a:cb in network mk-no-preload-434043
	I0903 23:39:30.929980  167951 main.go:141] libmachine: (no-preload-434043) DBG | domain no-preload-434043 has defined MAC address 52:54:00:18:6a:cb in network mk-no-preload-434043
	I0903 23:39:30.930221  167951 main.go:141] libmachine: (no-preload-434043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:6a:cb", ip: ""} in network mk-no-preload-434043: {Iface:virbr4 ExpiryTime:2025-09-04 00:38:55 +0000 UTC Type:0 Mac:52:54:00:18:6a:cb Iaid: IPaddr:192.168.72.145 Prefix:24 Hostname:no-preload-434043 Clientid:01:52:54:00:18:6a:cb}
	I0903 23:39:30.930285  167951 main.go:141] libmachine: (no-preload-434043) DBG | domain no-preload-434043 has defined IP address 192.168.72.145 and MAC address 52:54:00:18:6a:cb in network mk-no-preload-434043
	I0903 23:39:30.930682  167951 main.go:141] libmachine: (no-preload-434043) Calling .GetSSHPort
	I0903 23:39:30.930861  167951 main.go:141] libmachine: (no-preload-434043) Calling .GetSSHKeyPath
	I0903 23:39:30.931062  167951 main.go:141] libmachine: (no-preload-434043) Calling .GetSSHUsername
	I0903 23:39:30.931098  167951 main.go:141] libmachine: (no-preload-434043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:6a:cb", ip: ""} in network mk-no-preload-434043: {Iface:virbr4 ExpiryTime:2025-09-04 00:38:55 +0000 UTC Type:0 Mac:52:54:00:18:6a:cb Iaid: IPaddr:192.168.72.145 Prefix:24 Hostname:no-preload-434043 Clientid:01:52:54:00:18:6a:cb}
	I0903 23:39:30.931116  167951 main.go:141] libmachine: (no-preload-434043) DBG | domain no-preload-434043 has defined IP address 192.168.72.145 and MAC address 52:54:00:18:6a:cb in network mk-no-preload-434043
	I0903 23:39:30.931175  167951 sshutil.go:53] new ssh client: &{IP:192.168.72.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21341-109162/.minikube/machines/no-preload-434043/id_rsa Username:docker}
	I0903 23:39:30.932066  167951 main.go:141] libmachine: (no-preload-434043) Calling .GetSSHPort
	I0903 23:39:30.932251  167951 main.go:141] libmachine: (no-preload-434043) Calling .GetSSHKeyPath
	I0903 23:39:30.932469  167951 main.go:141] libmachine: (no-preload-434043) Calling .GetSSHUsername
	I0903 23:39:30.932671  167951 sshutil.go:53] new ssh client: &{IP:192.168.72.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21341-109162/.minikube/machines/no-preload-434043/id_rsa Username:docker}
	I0903 23:39:30.933250  167951 main.go:141] libmachine: (no-preload-434043) DBG | domain no-preload-434043 has defined MAC address 52:54:00:18:6a:cb in network mk-no-preload-434043
	I0903 23:39:30.933904  167951 main.go:141] libmachine: (no-preload-434043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:6a:cb", ip: ""} in network mk-no-preload-434043: {Iface:virbr4 ExpiryTime:2025-09-04 00:38:55 +0000 UTC Type:0 Mac:52:54:00:18:6a:cb Iaid: IPaddr:192.168.72.145 Prefix:24 Hostname:no-preload-434043 Clientid:01:52:54:00:18:6a:cb}
	I0903 23:39:30.933932  167951 main.go:141] libmachine: (no-preload-434043) DBG | domain no-preload-434043 has defined IP address 192.168.72.145 and MAC address 52:54:00:18:6a:cb in network mk-no-preload-434043
	I0903 23:39:30.937721  167951 main.go:141] libmachine: (no-preload-434043) Calling .GetSSHPort
	I0903 23:39:30.938011  167951 main.go:141] libmachine: (no-preload-434043) Calling .GetSSHKeyPath
	I0903 23:39:30.938313  167951 main.go:141] libmachine: (no-preload-434043) Calling .GetSSHUsername
	I0903 23:39:30.938593  167951 sshutil.go:53] new ssh client: &{IP:192.168.72.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21341-109162/.minikube/machines/no-preload-434043/id_rsa Username:docker}
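The DBG lines above resolve the VM's SSH address by matching its MAC against the network's DHCP leases. A sketch of that matching under assumed types (lease and ipForMAC are hypothetical, not minikube's actual code):

	package main

	import "fmt"

	// lease mirrors the fields the DBG lines above print for a libvirt
	// DHCP lease entry.
	type lease struct {
		MAC string
		IP  string
	}

	// ipForMAC scans the network's DHCP leases for the VM's MAC address,
	// the same matching the "found host DHCP lease" lines record.
	func ipForMAC(leases []lease, mac string) (string, bool) {
		for _, l := range leases {
			if l.MAC == mac {
				return l.IP, true
			}
		}
		return "", false
	}

	func main() {
		leases := []lease{{MAC: "52:54:00:18:6a:cb", IP: "192.168.72.145"}}
		if ip, ok := ipForMAC(leases, "52:54:00:18:6a:cb"); ok {
			fmt.Println("ssh target:", ip) // the address the ssh clients above dial
		}
	}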
	I0903 23:39:30.942958  167951 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33017
	I0903 23:39:30.943534  167951 main.go:141] libmachine: () Calling .GetVersion
	I0903 23:39:30.944030  167951 main.go:141] libmachine: Using API Version  1
	I0903 23:39:30.944053  167951 main.go:141] libmachine: () Calling .SetConfigRaw
	I0903 23:39:30.944469  167951 main.go:141] libmachine: () Calling .GetMachineName
	I0903 23:39:30.945591  167951 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21341-109162/.minikube/bin/docker-machine-driver-kvm2
	I0903 23:39:30.949659  167951 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0903 23:39:30.970235  167951 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43737
	I0903 23:39:30.970997  167951 main.go:141] libmachine: () Calling .GetVersion
	I0903 23:39:30.971694  167951 main.go:141] libmachine: Using API Version  1
	I0903 23:39:30.971723  167951 main.go:141] libmachine: () Calling .SetConfigRaw
	I0903 23:39:30.972120  167951 main.go:141] libmachine: () Calling .GetMachineName
	I0903 23:39:30.972343  167951 main.go:141] libmachine: (no-preload-434043) Calling .GetState
	I0903 23:39:30.974525  167951 main.go:141] libmachine: (no-preload-434043) Calling .DriverName
	I0903 23:39:30.974767  167951 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0903 23:39:30.974786  167951 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0903 23:39:30.974806  167951 main.go:141] libmachine: (no-preload-434043) Calling .GetSSHHostname
	I0903 23:39:30.978640  167951 main.go:141] libmachine: (no-preload-434043) DBG | domain no-preload-434043 has defined MAC address 52:54:00:18:6a:cb in network mk-no-preload-434043
	I0903 23:39:30.979150  167951 main.go:141] libmachine: (no-preload-434043) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:6a:cb", ip: ""} in network mk-no-preload-434043: {Iface:virbr4 ExpiryTime:2025-09-04 00:38:55 +0000 UTC Type:0 Mac:52:54:00:18:6a:cb Iaid: IPaddr:192.168.72.145 Prefix:24 Hostname:no-preload-434043 Clientid:01:52:54:00:18:6a:cb}
	I0903 23:39:30.979183  167951 main.go:141] libmachine: (no-preload-434043) DBG | domain no-preload-434043 has defined IP address 192.168.72.145 and MAC address 52:54:00:18:6a:cb in network mk-no-preload-434043
	I0903 23:39:30.979349  167951 main.go:141] libmachine: (no-preload-434043) Calling .GetSSHPort
	I0903 23:39:30.979545  167951 main.go:141] libmachine: (no-preload-434043) Calling .GetSSHKeyPath
	I0903 23:39:30.979734  167951 main.go:141] libmachine: (no-preload-434043) Calling .GetSSHUsername
	I0903 23:39:30.979898  167951 sshutil.go:53] new ssh client: &{IP:192.168.72.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21341-109162/.minikube/machines/no-preload-434043/id_rsa Username:docker}
	I0903 23:39:31.130703  167951 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0903 23:39:31.167066  167951 node_ready.go:35] waiting up to 6m0s for node "no-preload-434043" to be "Ready" ...
	I0903 23:39:31.174901  167951 node_ready.go:49] node "no-preload-434043" is "Ready"
	I0903 23:39:31.174933  167951 node_ready.go:38] duration metric: took 7.827583ms for node "no-preload-434043" to be "Ready" ...
	I0903 23:39:31.174948  167951 api_server.go:52] waiting for apiserver process to appear ...
	I0903 23:39:31.174996  167951 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:39:31.209527  167951 api_server.go:72] duration metric: took 516.97608ms to wait for apiserver process to appear ...
	I0903 23:39:31.209554  167951 api_server.go:88] waiting for apiserver healthz status ...
	I0903 23:39:31.209577  167951 api_server.go:253] Checking apiserver healthz at https://192.168.72.145:8443/healthz ...
	I0903 23:39:31.218555  167951 api_server.go:279] https://192.168.72.145:8443/healthz returned 200:
	ok
	I0903 23:39:31.221061  167951 api_server.go:141] control plane version: v1.34.0
	I0903 23:39:31.221085  167951 api_server.go:131] duration metric: took 11.521702ms to wait for apiserver health ...
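The api_server.go lines above poll the apiserver's /healthz endpoint until it answers 200. A minimal sketch of that loop (Wait is a hypothetical helper; certificate verification is skipped here for brevity, where real code would pin the cluster CA):

	package healthz

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// Wait polls an apiserver /healthz URL until it returns HTTP 200 or
	// the timeout expires, sleeping 500ms between attempts.
	func Wait(url string, timeout time.Duration) error {
		client := &http.Client{
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
			Timeout:   2 * time.Second,
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // healthz returned 200: ok
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("timed out waiting for %s to report healthy", url)
	}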
	I0903 23:39:31.221095  167951 system_pods.go:43] waiting for kube-system pods to appear ...
	I0903 23:39:31.228196  167951 system_pods.go:59] 8 kube-system pods found
	I0903 23:39:31.228233  167951 system_pods.go:61] "coredns-66bc5c9577-z2s2p" [d39823a0-08dc-474c-bf6b-40d74bb06086] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0903 23:39:31.228243  167951 system_pods.go:61] "etcd-no-preload-434043" [cb3bdc9b-2cc5-48bf-af81-e466291b15ae] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0903 23:39:31.228253  167951 system_pods.go:61] "kube-apiserver-no-preload-434043" [bbc48910-bfce-4152-a0d9-213fab7b0e9d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0903 23:39:31.228262  167951 system_pods.go:61] "kube-controller-manager-no-preload-434043" [368d7eae-18f4-4a7c-9d38-5dba34a34a0d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0903 23:39:31.228268  167951 system_pods.go:61] "kube-proxy-lf7rz" [d3a15894-b9c5-47b0-9486-4b2f0a646a66] Running
	I0903 23:39:31.228279  167951 system_pods.go:61] "kube-scheduler-no-preload-434043" [01f11d9a-a42b-47df-93f8-7a6d34f05eaf] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0903 23:39:31.228287  167951 system_pods.go:61] "metrics-server-746fcd58dc-qn2mm" [e256b1d8-cce6-4144-aa59-a9a030f99eb9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0903 23:39:31.228301  167951 system_pods.go:61] "storage-provisioner" [52149bb2-d696-46fd-a4e6-15ccafdebf02] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0903 23:39:31.228313  167951 system_pods.go:74] duration metric: took 7.210776ms to wait for pod list to return data ...
	I0903 23:39:31.228326  167951 default_sa.go:34] waiting for default service account to be created ...
	I0903 23:39:31.234005  167951 default_sa.go:45] found service account: "default"
	I0903 23:39:31.234030  167951 default_sa.go:55] duration metric: took 5.694551ms for default service account to be created ...
	I0903 23:39:31.234042  167951 system_pods.go:116] waiting for k8s-apps to be running ...
	I0903 23:39:31.239296  167951 system_pods.go:86] 8 kube-system pods found
	I0903 23:39:31.239329  167951 system_pods.go:89] "coredns-66bc5c9577-z2s2p" [d39823a0-08dc-474c-bf6b-40d74bb06086] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0903 23:39:31.239340  167951 system_pods.go:89] "etcd-no-preload-434043" [cb3bdc9b-2cc5-48bf-af81-e466291b15ae] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0903 23:39:31.239351  167951 system_pods.go:89] "kube-apiserver-no-preload-434043" [bbc48910-bfce-4152-a0d9-213fab7b0e9d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0903 23:39:31.239362  167951 system_pods.go:89] "kube-controller-manager-no-preload-434043" [368d7eae-18f4-4a7c-9d38-5dba34a34a0d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0903 23:39:31.239371  167951 system_pods.go:89] "kube-proxy-lf7rz" [d3a15894-b9c5-47b0-9486-4b2f0a646a66] Running
	I0903 23:39:31.239384  167951 system_pods.go:89] "kube-scheduler-no-preload-434043" [01f11d9a-a42b-47df-93f8-7a6d34f05eaf] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0903 23:39:31.239394  167951 system_pods.go:89] "metrics-server-746fcd58dc-qn2mm" [e256b1d8-cce6-4144-aa59-a9a030f99eb9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0903 23:39:31.239405  167951 system_pods.go:89] "storage-provisioner" [52149bb2-d696-46fd-a4e6-15ccafdebf02] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0903 23:39:31.239413  167951 system_pods.go:126] duration metric: took 5.365177ms to wait for k8s-apps to be running ...
	I0903 23:39:31.239425  167951 system_svc.go:44] waiting for kubelet service to be running ....
	I0903 23:39:31.239473  167951 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0903 23:39:31.292169  167951 system_svc.go:56] duration metric: took 52.735418ms WaitForService to wait for kubelet
	I0903 23:39:31.292202  167951 kubeadm.go:578] duration metric: took 599.654473ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0903 23:39:31.292225  167951 node_conditions.go:102] verifying NodePressure condition ...
	I0903 23:39:31.298898  167951 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0903 23:39:31.298922  167951 node_conditions.go:123] node cpu capacity is 2
	I0903 23:39:31.298936  167951 node_conditions.go:105] duration metric: took 6.70535ms to run NodePressure ...
	I0903 23:39:31.298952  167951 start.go:241] waiting for startup goroutines ...
	I0903 23:39:31.319927  167951 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0903 23:39:31.319948  167951 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0903 23:39:31.325067  167951 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0903 23:39:31.325090  167951 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0903 23:39:31.329142  167951 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0903 23:39:31.347147  167951 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0903 23:39:31.409588  167951 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0903 23:39:31.409615  167951 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0903 23:39:31.411804  167951 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0903 23:39:31.411826  167951 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0903 23:39:31.497017  167951 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0903 23:39:31.497047  167951 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0903 23:39:31.505080  167951 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0903 23:39:31.505110  167951 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0903 23:39:31.564683  167951 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0903 23:39:31.568463  167951 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0903 23:39:31.568495  167951 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0903 23:39:31.636504  167951 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0903 23:39:31.636548  167951 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0903 23:39:31.712523  167951 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0903 23:39:31.712560  167951 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0903 23:39:31.768671  167951 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0903 23:39:31.768718  167951 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0903 23:39:31.852511  167951 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0903 23:39:31.852556  167951 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0903 23:39:31.933535  167951 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0903 23:39:31.933572  167951 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0903 23:39:32.030695  167951 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
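Each addon above is installed in two steps: the manifest is scp'd into /etc/kubernetes/addons/ on the VM, then the cluster's bundled kubectl applies the whole set in one invocation. A sketch of the apply step (Apply is a hypothetical wrapper; the binary and manifest paths are illustrative):

	package addons

	import (
		"os"
		"os/exec"
	)

	// Apply shells out to the cluster's bundled kubectl and applies a set
	// of addon manifests in a single invocation, mirroring the ssh_runner
	// commands above.
	func Apply(kubectl, kubeconfig string, manifests []string) error {
		args := []string{"apply"}
		for _, m := range manifests {
			args = append(args, "-f", m)
		}
		cmd := exec.Command(kubectl, args...)
		cmd.Env = append(os.Environ(), "KUBECONFIG="+kubeconfig)
		return cmd.Run()
	}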
	I0903 23:39:35.006492  167951 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.659296879s)
	I0903 23:39:35.006576  167951 main.go:141] libmachine: Making call to close driver server
	I0903 23:39:35.006592  167951 main.go:141] libmachine: (no-preload-434043) Calling .Close
	I0903 23:39:35.006963  167951 main.go:141] libmachine: Successfully made call to close driver server
	I0903 23:39:35.006986  167951 main.go:141] libmachine: Making call to close connection to plugin binary
	I0903 23:39:35.006998  167951 main.go:141] libmachine: Making call to close driver server
	I0903 23:39:35.007008  167951 main.go:141] libmachine: (no-preload-434043) Calling .Close
	I0903 23:39:35.007538  167951 main.go:141] libmachine: (no-preload-434043) DBG | Closing plugin on server side
	I0903 23:39:35.007589  167951 main.go:141] libmachine: Successfully made call to close driver server
	I0903 23:39:35.007620  167951 main.go:141] libmachine: Making call to close connection to plugin binary
	I0903 23:39:35.010661  167951 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.681478522s)
	I0903 23:39:35.010699  167951 main.go:141] libmachine: Making call to close driver server
	I0903 23:39:35.010709  167951 main.go:141] libmachine: (no-preload-434043) Calling .Close
	I0903 23:39:35.011031  167951 main.go:141] libmachine: Successfully made call to close driver server
	I0903 23:39:35.011053  167951 main.go:141] libmachine: Making call to close connection to plugin binary
	I0903 23:39:35.011063  167951 main.go:141] libmachine: Making call to close driver server
	I0903 23:39:35.011072  167951 main.go:141] libmachine: (no-preload-434043) Calling .Close
	I0903 23:39:35.012729  167951 main.go:141] libmachine: (no-preload-434043) DBG | Closing plugin on server side
	I0903 23:39:35.012763  167951 main.go:141] libmachine: Successfully made call to close driver server
	I0903 23:39:35.012780  167951 main.go:141] libmachine: Making call to close connection to plugin binary
	I0903 23:39:35.093772  167951 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (3.529031226s)
	I0903 23:39:35.093830  167951 main.go:141] libmachine: Making call to close driver server
	I0903 23:39:35.093846  167951 main.go:141] libmachine: (no-preload-434043) Calling .Close
	I0903 23:39:35.094207  167951 main.go:141] libmachine: Successfully made call to close driver server
	I0903 23:39:35.094235  167951 main.go:141] libmachine: Making call to close connection to plugin binary
	I0903 23:39:35.094246  167951 main.go:141] libmachine: Making call to close driver server
	I0903 23:39:35.094254  167951 main.go:141] libmachine: (no-preload-434043) Calling .Close
	I0903 23:39:35.098319  167951 main.go:141] libmachine: (no-preload-434043) DBG | Closing plugin on server side
	I0903 23:39:35.098337  167951 main.go:141] libmachine: Successfully made call to close driver server
	I0903 23:39:35.098358  167951 main.go:141] libmachine: Making call to close connection to plugin binary
	I0903 23:39:35.098371  167951 addons.go:479] Verifying addon metrics-server=true in "no-preload-434043"
	I0903 23:39:35.098550  167951 main.go:141] libmachine: Making call to close driver server
	I0903 23:39:35.098568  167951 main.go:141] libmachine: (no-preload-434043) Calling .Close
	I0903 23:39:35.098881  167951 main.go:141] libmachine: Successfully made call to close driver server
	I0903 23:39:35.098898  167951 main.go:141] libmachine: Making call to close connection to plugin binary
	I0903 23:39:35.294568  167951 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (3.263818135s)
	I0903 23:39:35.294653  167951 main.go:141] libmachine: Making call to close driver server
	I0903 23:39:35.294676  167951 main.go:141] libmachine: (no-preload-434043) Calling .Close
	I0903 23:39:35.295105  167951 main.go:141] libmachine: Successfully made call to close driver server
	I0903 23:39:35.295130  167951 main.go:141] libmachine: Making call to close connection to plugin binary
	I0903 23:39:35.295140  167951 main.go:141] libmachine: Making call to close driver server
	I0903 23:39:35.295149  167951 main.go:141] libmachine: (no-preload-434043) Calling .Close
	I0903 23:39:35.297127  167951 main.go:141] libmachine: (no-preload-434043) DBG | Closing plugin on server side
	I0903 23:39:35.297151  167951 main.go:141] libmachine: Successfully made call to close driver server
	I0903 23:39:35.297172  167951 main.go:141] libmachine: Making call to close connection to plugin binary
	I0903 23:39:35.298897  167951 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-434043 addons enable metrics-server
	
	I0903 23:39:35.300309  167951 out.go:179] * Enabled addons: storage-provisioner, metrics-server, default-storageclass, dashboard
	I0903 23:39:30.569160  168184 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0903 23:39:30.585799  168184 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0903 23:39:30.590817  168184 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep  3 22:28 /usr/share/ca-certificates/minikubeCA.pem
	I0903 23:39:30.590881  168184 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0903 23:39:30.598100  168184 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0903 23:39:30.611138  168184 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/113288.pem && ln -fs /usr/share/ca-certificates/113288.pem /etc/ssl/certs/113288.pem"
	I0903 23:39:30.626975  168184 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/113288.pem
	I0903 23:39:30.631962  168184 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep  3 22:36 /usr/share/ca-certificates/113288.pem
	I0903 23:39:30.632013  168184 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/113288.pem
	I0903 23:39:30.639457  168184 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/113288.pem /etc/ssl/certs/51391683.0"
	I0903 23:39:30.652349  168184 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1132882.pem && ln -fs /usr/share/ca-certificates/1132882.pem /etc/ssl/certs/1132882.pem"
	I0903 23:39:30.669722  168184 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1132882.pem
	I0903 23:39:30.676323  168184 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep  3 22:36 /usr/share/ca-certificates/1132882.pem
	I0903 23:39:30.676391  168184 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1132882.pem
	I0903 23:39:30.684739  168184 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1132882.pem /etc/ssl/certs/3ec20f2e.0"
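The certificate lines above install each CA by computing its OpenSSL subject hash and symlinking /etc/ssl/certs/<hash>.0 at the PEM, which is how TLS clients locate trusted CAs. A sketch of that step (LinkCACert is a hypothetical helper; running it for real requires root):

	package certs

	import (
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// LinkCACert asks openssl for the certificate's subject hash, then
	// symlinks /etc/ssl/certs/<hash>.0 to the PEM, as the
	// `openssl x509 -hash` and `ln -fs` commands above do.
	func LinkCACert(pem string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
		if err != nil {
			return err
		}
		link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
		_ = os.Remove(link) // replace any stale link, as `ln -fs` would
		return os.Symlink(pem, link)
	}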
	I0903 23:39:30.698776  168184 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0903 23:39:30.705787  168184 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0903 23:39:30.715596  168184 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0903 23:39:30.723820  168184 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0903 23:39:30.734268  168184 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0903 23:39:30.751209  168184 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0903 23:39:30.769986  168184 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
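The `-checkend 86400` runs above probe whether each control-plane certificate survives the next 24 hours: openssl exits 0 when the certificate will still be valid and 1 when it will have expired by then. A sketch reading that exit code (ExpiresWithinDay is a hypothetical wrapper):

	package expiry

	import (
		"errors"
		"os/exec"
	)

	// ExpiresWithinDay runs the same probe as the log,
	// `openssl x509 -checkend 86400`, and reports whether the certificate
	// expires within the window.
	func ExpiresWithinDay(cert string) (bool, error) {
		err := exec.Command("openssl", "x509", "-noout", "-in", cert, "-checkend", "86400").Run()
		if err == nil {
			return false, nil // still valid in 24 hours
		}
		var ee *exec.ExitError
		if errors.As(err, &ee) {
			return true, nil // non-zero exit: expires within the window
		}
		return false, err // openssl itself failed to run
	}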
	I0903 23:39:30.779742  168184 kubeadm.go:392] StartCluster: {Name:embed-certs-088493 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21147/minikube-v1.36.0-1753487480-21147-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:embed-certs-088493 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.143 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0903 23:39:30.779870  168184 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0903 23:39:30.779944  168184 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0903 23:39:30.826700  168184 cri.go:89] found id: ""
	I0903 23:39:30.826791  168184 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0903 23:39:30.843146  168184 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0903 23:39:30.843174  168184 kubeadm.go:589] restartPrimaryControlPlane start ...
	I0903 23:39:30.843233  168184 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0903 23:39:30.856578  168184 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0903 23:39:30.857287  168184 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-088493" does not appear in /home/jenkins/minikube-integration/21341-109162/kubeconfig
	I0903 23:39:30.857752  168184 kubeconfig.go:62] /home/jenkins/minikube-integration/21341-109162/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-088493" cluster setting kubeconfig missing "embed-certs-088493" context setting]
	I0903 23:39:30.858340  168184 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21341-109162/kubeconfig: {Name:mkb8e4377749caffe9aea4452a6b5de9f0dc7427 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0903 23:39:30.859693  168184 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0903 23:39:30.872955  168184 kubeadm.go:626] The running cluster does not require reconfiguration: 192.168.50.143
	I0903 23:39:30.873001  168184 kubeadm.go:1152] stopping kube-system containers ...
	I0903 23:39:30.873018  168184 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0903 23:39:30.873080  168184 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0903 23:39:30.937819  168184 cri.go:89] found id: ""
	I0903 23:39:30.937898  168184 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0903 23:39:30.970391  168184 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0903 23:39:30.985618  168184 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0903 23:39:30.985641  168184 kubeadm.go:157] found existing configuration files:
	
	I0903 23:39:30.985702  168184 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0903 23:39:30.997473  168184 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0903 23:39:30.997551  168184 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0903 23:39:31.011825  168184 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0903 23:39:31.026448  168184 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0903 23:39:31.026510  168184 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0903 23:39:31.039622  168184 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0903 23:39:31.051294  168184 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0903 23:39:31.051360  168184 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0903 23:39:31.065244  168184 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0903 23:39:31.077889  168184 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0903 23:39:31.077952  168184 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0903 23:39:31.093981  168184 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0903 23:39:31.108296  168184 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0903 23:39:31.176874  168184 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0903 23:39:32.823767  168184 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.646847779s)
	I0903 23:39:32.823806  168184 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0903 23:39:33.102206  168184 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0903 23:39:33.185673  168184 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0903 23:39:33.256402  168184 api_server.go:52] waiting for apiserver process to appear ...
	I0903 23:39:33.256504  168184 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:39:33.757483  168184 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:39:34.256629  168184 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:39:34.756682  168184 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:39:35.257560  168184 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
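The repeated pgrep runs above poll for the kube-apiserver process at roughly 500ms intervals until it appears. A sketch of that wait loop (WaitFor is a hypothetical helper; pgrep exits 0 when at least one process matches):

	package procwait

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// WaitFor polls `pgrep -xnf pattern` every 500ms until a matching
	// process appears or the timeout expires, the loop the repeated pgrep
	// lines above come from.
	func WaitFor(pattern string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if exec.Command("pgrep", "-xnf", pattern).Run() == nil {
				return nil
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("no process matching %q after %v", pattern, timeout)
	}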
	I0903 23:39:31.451460  168525 out.go:252] * Restarting existing kvm2 VM for "default-k8s-diff-port-799704" ...
	I0903 23:39:31.451487  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) Calling .Start
	I0903 23:39:31.451677  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) starting domain...
	I0903 23:39:31.451780  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) ensuring networks are active...
	I0903 23:39:31.452685  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) Ensuring network default is active
	I0903 23:39:31.453151  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) Ensuring network mk-default-k8s-diff-port-799704 is active
	I0903 23:39:31.453750  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) getting domain XML...
	I0903 23:39:31.454639  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) creating domain...
	I0903 23:39:32.850704  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) waiting for IP...
	I0903 23:39:32.851600  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | domain default-k8s-diff-port-799704 has defined MAC address 52:54:00:a0:5b:2e in network mk-default-k8s-diff-port-799704
	I0903 23:39:32.852214  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | unable to find current IP address of domain default-k8s-diff-port-799704 in network mk-default-k8s-diff-port-799704
	I0903 23:39:32.852359  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | I0903 23:39:32.852203  168561 retry.go:31] will retry after 194.562879ms: waiting for domain to come up
	I0903 23:39:33.049200  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | domain default-k8s-diff-port-799704 has defined MAC address 52:54:00:a0:5b:2e in network mk-default-k8s-diff-port-799704
	I0903 23:39:33.049910  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | unable to find current IP address of domain default-k8s-diff-port-799704 in network mk-default-k8s-diff-port-799704
	I0903 23:39:33.049989  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | I0903 23:39:33.049872  168561 retry.go:31] will retry after 346.789216ms: waiting for domain to come up
	I0903 23:39:33.398907  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | domain default-k8s-diff-port-799704 has defined MAC address 52:54:00:a0:5b:2e in network mk-default-k8s-diff-port-799704
	I0903 23:39:33.399505  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | unable to find current IP address of domain default-k8s-diff-port-799704 in network mk-default-k8s-diff-port-799704
	I0903 23:39:33.399547  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | I0903 23:39:33.399469  168561 retry.go:31] will retry after 396.68152ms: waiting for domain to come up
	I0903 23:39:33.798263  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | domain default-k8s-diff-port-799704 has defined MAC address 52:54:00:a0:5b:2e in network mk-default-k8s-diff-port-799704
	I0903 23:39:33.799050  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | unable to find current IP address of domain default-k8s-diff-port-799704 in network mk-default-k8s-diff-port-799704
	I0903 23:39:33.799087  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | I0903 23:39:33.798998  168561 retry.go:31] will retry after 388.322823ms: waiting for domain to come up
	I0903 23:39:34.188660  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | domain default-k8s-diff-port-799704 has defined MAC address 52:54:00:a0:5b:2e in network mk-default-k8s-diff-port-799704
	I0903 23:39:34.189376  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | unable to find current IP address of domain default-k8s-diff-port-799704 in network mk-default-k8s-diff-port-799704
	I0903 23:39:34.189482  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | I0903 23:39:34.189334  168561 retry.go:31] will retry after 742.14172ms: waiting for domain to come up
	I0903 23:39:34.932960  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | domain default-k8s-diff-port-799704 has defined MAC address 52:54:00:a0:5b:2e in network mk-default-k8s-diff-port-799704
	I0903 23:39:34.933626  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | unable to find current IP address of domain default-k8s-diff-port-799704 in network mk-default-k8s-diff-port-799704
	I0903 23:39:34.933713  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | I0903 23:39:34.933579  168561 retry.go:31] will retry after 698.598056ms: waiting for domain to come up
	I0903 23:39:35.634753  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | domain default-k8s-diff-port-799704 has defined MAC address 52:54:00:a0:5b:2e in network mk-default-k8s-diff-port-799704
	I0903 23:39:35.635481  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | unable to find current IP address of domain default-k8s-diff-port-799704 in network mk-default-k8s-diff-port-799704
	I0903 23:39:35.635508  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | I0903 23:39:35.635369  168561 retry.go:31] will retry after 956.852118ms: waiting for domain to come up
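The retry.go lines above wait for the restarted VM's DHCP lease with a randomized, growing delay between attempts. A sketch of that retry shape (Backoff is a hypothetical helper, not minikube's actual retry package):

	package retry

	import (
		"fmt"
		"math/rand"
		"time"
	)

	// Backoff keeps calling fn with a randomized, growing delay between
	// attempts, the shape of the "will retry after ..." lines above.
	func Backoff(attempts int, base time.Duration, fn func() error) error {
		var err error
		for i := 0; i < attempts; i++ {
			if err = fn(); err == nil {
				return nil
			}
			d := base*time.Duration(i+1) + time.Duration(rand.Int63n(int64(base)))
			fmt.Printf("will retry after %v: %v\n", d, err)
			time.Sleep(d)
		}
		return err
	}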
	I0903 23:39:35.301402  167951 addons.go:514] duration metric: took 4.608814093s for enable addons: enabled=[storage-provisioner metrics-server default-storageclass dashboard]
	I0903 23:39:35.301452  167951 start.go:246] waiting for cluster config update ...
	I0903 23:39:35.301470  167951 start.go:255] writing updated cluster config ...
	I0903 23:39:35.301784  167951 ssh_runner.go:195] Run: rm -f paused
	I0903 23:39:35.306947  167951 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0903 23:39:35.311995  167951 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-z2s2p" in "kube-system" namespace to be "Ready" or be gone ...
	I0903 23:39:35.322196  167951 pod_ready.go:94] pod "coredns-66bc5c9577-z2s2p" is "Ready"
	I0903 23:39:35.322232  167951 pod_ready.go:86] duration metric: took 10.20611ms for pod "coredns-66bc5c9577-z2s2p" in "kube-system" namespace to be "Ready" or be gone ...
	I0903 23:39:35.327157  167951 pod_ready.go:83] waiting for pod "etcd-no-preload-434043" in "kube-system" namespace to be "Ready" or be gone ...
	W0903 23:39:37.336026  167951 pod_ready.go:104] pod "etcd-no-preload-434043" is not "Ready", error: <nil>
	I0903 23:39:38.836063  167951 pod_ready.go:94] pod "etcd-no-preload-434043" is "Ready"
	I0903 23:39:38.836099  167951 pod_ready.go:86] duration metric: took 3.508912099s for pod "etcd-no-preload-434043" in "kube-system" namespace to be "Ready" or be gone ...
	I0903 23:39:38.844005  167951 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-434043" in "kube-system" namespace to be "Ready" or be gone ...
	I0903 23:39:38.851465  167951 pod_ready.go:94] pod "kube-apiserver-no-preload-434043" is "Ready"
	I0903 23:39:38.851496  167951 pod_ready.go:86] duration metric: took 7.457768ms for pod "kube-apiserver-no-preload-434043" in "kube-system" namespace to be "Ready" or be gone ...
	I0903 23:39:38.853909  167951 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-434043" in "kube-system" namespace to be "Ready" or be gone ...
	I0903 23:39:38.859802  167951 pod_ready.go:94] pod "kube-controller-manager-no-preload-434043" is "Ready"
	I0903 23:39:38.859824  167951 pod_ready.go:86] duration metric: took 5.889234ms for pod "kube-controller-manager-no-preload-434043" in "kube-system" namespace to be "Ready" or be gone ...
	I0903 23:39:38.863186  167951 pod_ready.go:83] waiting for pod "kube-proxy-lf7rz" in "kube-system" namespace to be "Ready" or be gone ...
	I0903 23:39:39.113115  167951 pod_ready.go:94] pod "kube-proxy-lf7rz" is "Ready"
	I0903 23:39:39.113155  167951 pod_ready.go:86] duration metric: took 249.948168ms for pod "kube-proxy-lf7rz" in "kube-system" namespace to be "Ready" or be gone ...
	I0903 23:39:39.315739  167951 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-434043" in "kube-system" namespace to be "Ready" or be gone ...
	I0903 23:39:39.712333  167951 pod_ready.go:94] pod "kube-scheduler-no-preload-434043" is "Ready"
	I0903 23:39:39.712376  167951 pod_ready.go:86] duration metric: took 396.599596ms for pod "kube-scheduler-no-preload-434043" in "kube-system" namespace to be "Ready" or be gone ...
	I0903 23:39:39.712391  167951 pod_ready.go:40] duration metric: took 4.405411155s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
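The pod_ready waits above block until every kube-system pod matching one of the listed labels reports Ready. A rough equivalent using `kubectl wait` (a real subcommand; the Ready wrapper itself is hypothetical, and note that `kubectl wait` errors when a selector matches no pods at all):

	package podwait

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// Ready blocks until every kube-system pod matching one of the given
	// label selectors reports the Ready condition.
	func Ready(labels []string, timeout time.Duration) error {
		for _, l := range labels {
			cmd := exec.Command("kubectl", "wait", "--namespace", "kube-system",
				"--for=condition=Ready", "pod", "--selector", l,
				fmt.Sprintf("--timeout=%s", timeout))
			if err := cmd.Run(); err != nil {
				return fmt.Errorf("pods %q not ready: %w", l, err)
			}
		}
		return nil
	}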
	I0903 23:39:39.778245  167951 start.go:617] kubectl: 1.33.2, cluster: 1.34.0 (minor skew: 1)
	I0903 23:39:39.779595  167951 out.go:179] * Done! kubectl is now configured to use "no-preload-434043" cluster and "default" namespace by default
	I0903 23:39:35.756635  168184 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:39:35.795249  168184 api_server.go:72] duration metric: took 2.538848326s to wait for apiserver process to appear ...
	I0903 23:39:35.795285  168184 api_server.go:88] waiting for apiserver healthz status ...
	I0903 23:39:35.795314  168184 api_server.go:253] Checking apiserver healthz at https://192.168.50.143:8443/healthz ...
	I0903 23:39:38.583193  168184 api_server.go:279] https://192.168.50.143:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0903 23:39:38.583228  168184 api_server.go:103] status: https://192.168.50.143:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0903 23:39:38.583252  168184 api_server.go:253] Checking apiserver healthz at https://192.168.50.143:8443/healthz ...
	I0903 23:39:38.685816  168184 api_server.go:279] https://192.168.50.143:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0903 23:39:38.685847  168184 api_server.go:103] status: https://192.168.50.143:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0903 23:39:38.796197  168184 api_server.go:253] Checking apiserver healthz at https://192.168.50.143:8443/healthz ...
	I0903 23:39:38.802478  168184 api_server.go:279] https://192.168.50.143:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0903 23:39:38.802514  168184 api_server.go:103] status: https://192.168.50.143:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0903 23:39:39.296152  168184 api_server.go:253] Checking apiserver healthz at https://192.168.50.143:8443/healthz ...
	I0903 23:39:39.304676  168184 api_server.go:279] https://192.168.50.143:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0903 23:39:39.795900  168184 api_server.go:253] Checking apiserver healthz at https://192.168.50.143:8443/healthz ...
	I0903 23:39:39.808669  168184 api_server.go:279] https://192.168.50.143:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0903 23:39:40.296345  168184 api_server.go:253] Checking apiserver healthz at https://192.168.50.143:8443/healthz ...
	I0903 23:39:40.301248  168184 api_server.go:279] https://192.168.50.143:8443/healthz returned 200:
	ok
	I0903 23:39:40.308506  168184 api_server.go:141] control plane version: v1.34.0
	I0903 23:39:40.308532  168184 api_server.go:131] duration metric: took 4.513239874s to wait for apiserver health ...
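
The block above is minikube's apiserver health wait: api_server.go polls https://192.168.50.143:8443/healthz roughly every 500ms until the 500s (unfinished poststarthooks such as rbac/bootstrap-roles) give way to a 200. Below is a minimal Go sketch of that polling shape, assuming a plain HTTPS probe; the real client authenticates with the cluster's certificates, which this sketch skips via InsecureSkipVerify.

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    // waitForHealthz polls url until it returns 200 or the timeout expires.
    func waitForHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
            Timeout:   2 * time.Second,
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil // apiserver answered "ok"
                }
                // A 500 body lists each poststarthook as [+] ok or [-] failed,
                // exactly as in the log above.
                fmt.Printf("healthz returned %d:\n%s", resp.StatusCode, body)
            }
            time.Sleep(500 * time.Millisecond) // ~ the cadence visible in the timestamps
        }
        return fmt.Errorf("apiserver not healthy within %s", timeout)
    }

    func main() {
        if err := waitForHealthz("https://192.168.50.143:8443/healthz", 4*time.Minute); err != nil {
            fmt.Println(err)
        }
    }
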
	I0903 23:39:40.308544  168184 cni.go:84] Creating CNI manager for ""
	I0903 23:39:40.308560  168184 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0903 23:39:40.310257  168184 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I0903 23:39:40.311411  168184 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0903 23:39:40.324297  168184 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
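
Here minikube stages a 496-byte bridge CNI config to /etc/cni/net.d/1-k8s.conflist. The payload itself is not reproduced in the log; the sketch below writes a typical bridge+portmap conflist, with the subnet and field values as assumptions.

    package main

    import "os"

    // A typical bridge+portmap conflist; the real 496-byte payload is not
    // shown in the log, so every value here is illustrative.
    const conflist = `{
      "cniVersion": "0.4.0",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": {"type": "host-local", "subnet": "10.244.0.0/16"}
        },
        {"type": "portmap", "capabilities": {"portMappings": true}}
      ]
    }`

    func main() {
        // minikube stages the file over SSH ("scp memory"); locally this is
        // just a write into the CNI config directory.
        if err := os.WriteFile("1-k8s.conflist", []byte(conflist), 0o644); err != nil {
            panic(err)
        }
    }
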
	I0903 23:39:40.359191  168184 system_pods.go:43] waiting for kube-system pods to appear ...
	I0903 23:39:40.365887  168184 system_pods.go:59] 8 kube-system pods found
	I0903 23:39:40.365935  168184 system_pods.go:61] "coredns-66bc5c9577-hg9bb" [f8c43287-ec9a-48ad-b799-e5bb4b30b817] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0903 23:39:40.365948  168184 system_pods.go:61] "etcd-embed-certs-088493" [0917f6cc-6edc-4812-81c3-15f318021f57] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0903 23:39:40.365960  168184 system_pods.go:61] "kube-apiserver-embed-certs-088493" [5324d5b1-225a-4bab-8624-807c65f7737f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0903 23:39:40.365970  168184 system_pods.go:61] "kube-controller-manager-embed-certs-088493" [c15fb12e-7f6b-4bfe-977c-97d35447e245] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0903 23:39:40.365979  168184 system_pods.go:61] "kube-proxy-pgtpd" [083b9318-0780-4c96-8991-7534443b6159] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0903 23:39:40.365994  168184 system_pods.go:61] "kube-scheduler-embed-certs-088493" [41c8bd25-dbb6-4d53-8642-d6f837c5c859] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0903 23:39:40.366002  168184 system_pods.go:61] "metrics-server-746fcd58dc-85qvg" [000bf568-f6a0-4621-899d-788283765155] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0903 23:39:40.366010  168184 system_pods.go:61] "storage-provisioner" [7c1d1800-66c1-42f5-87c5-675fd6610230] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0903 23:39:40.366018  168184 system_pods.go:74] duration metric: took 6.796748ms to wait for pod list to return data ...
	I0903 23:39:40.366035  168184 node_conditions.go:102] verifying NodePressure condition ...
	I0903 23:39:40.370198  168184 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0903 23:39:40.370234  168184 node_conditions.go:123] node cpu capacity is 2
	I0903 23:39:40.370251  168184 node_conditions.go:105] duration metric: took 4.209293ms to run NodePressure ...
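
The system_pods and node_conditions checks above list the kube-system pods and read node capacity through the Kubernetes API. A hedged client-go sketch of those two reads follows, using the kubeconfig path this run writes; error handling is reduced to panics.

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/21341-109162/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        fmt.Printf("%d kube-system pods found\n", len(pods.Items))

        nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, n := range nodes.Items {
            cpu := n.Status.Capacity[corev1.ResourceCPU]
            eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
            // Matches the "node cpu capacity" / "ephemeral capacity" lines above.
            fmt.Printf("node %s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), eph.String())
        }
    }
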
	I0903 23:39:40.370274  168184 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0903 23:39:40.700552  168184 kubeadm.go:720] waiting for restarted kubelet to initialise ...
	I0903 23:39:40.707329  168184 kubeadm.go:735] kubelet initialised
	I0903 23:39:40.707359  168184 kubeadm.go:736] duration metric: took 6.769898ms waiting for restarted kubelet to initialise ...
	I0903 23:39:40.707380  168184 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0903 23:39:40.742387  168184 ops.go:34] apiserver oom_adj: -16
	I0903 23:39:40.742423  168184 kubeadm.go:593] duration metric: took 9.899238858s to restartPrimaryControlPlane
	I0903 23:39:40.742436  168184 kubeadm.go:394] duration metric: took 9.962706136s to StartCluster
	I0903 23:39:40.742460  168184 settings.go:142] acquiring lock: {Name:mkb1ef9c34f4ee762bb1ce9c74e3b8a2e234a4c2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0903 23:39:40.742582  168184 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21341-109162/kubeconfig
	I0903 23:39:40.744274  168184 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21341-109162/kubeconfig: {Name:mkb8e4377749caffe9aea4452a6b5de9f0dc7427 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0903 23:39:40.744616  168184 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.143 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0903 23:39:40.744750  168184 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0903 23:39:40.744860  168184 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-088493"
	I0903 23:39:40.744868  168184 config.go:182] Loaded profile config "embed-certs-088493": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0903 23:39:40.744881  168184 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-088493"
	W0903 23:39:40.744893  168184 addons.go:247] addon storage-provisioner should already be in state true
	I0903 23:39:40.744922  168184 addons.go:69] Setting default-storageclass=true in profile "embed-certs-088493"
	I0903 23:39:40.744933  168184 addons.go:69] Setting metrics-server=true in profile "embed-certs-088493"
	I0903 23:39:40.744944  168184 addons.go:238] Setting addon metrics-server=true in "embed-certs-088493"
	I0903 23:39:40.744944  168184 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-088493"
	W0903 23:39:40.744954  168184 addons.go:247] addon metrics-server should already be in state true
	I0903 23:39:40.744973  168184 host.go:66] Checking if "embed-certs-088493" exists ...
	I0903 23:39:40.745459  168184 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21341-109162/.minikube/bin/docker-machine-driver-kvm2
	I0903 23:39:40.745485  168184 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21341-109162/.minikube/bin/docker-machine-driver-kvm2
	I0903 23:39:40.745506  168184 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0903 23:39:40.745535  168184 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0903 23:39:40.744924  168184 host.go:66] Checking if "embed-certs-088493" exists ...
	I0903 23:39:40.745779  168184 addons.go:69] Setting dashboard=true in profile "embed-certs-088493"
	I0903 23:39:40.745802  168184 addons.go:238] Setting addon dashboard=true in "embed-certs-088493"
	W0903 23:39:40.745830  168184 addons.go:247] addon dashboard should already be in state true
	I0903 23:39:40.745870  168184 host.go:66] Checking if "embed-certs-088493" exists ...
	I0903 23:39:40.746262  168184 out.go:179] * Verifying Kubernetes components...
	I0903 23:39:40.746282  168184 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21341-109162/.minikube/bin/docker-machine-driver-kvm2
	I0903 23:39:40.746267  168184 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21341-109162/.minikube/bin/docker-machine-driver-kvm2
	I0903 23:39:40.746391  168184 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0903 23:39:40.746425  168184 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0903 23:39:40.747698  168184 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0903 23:39:40.767429  168184 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35437
	I0903 23:39:40.767449  168184 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38997
	I0903 23:39:40.767992  168184 main.go:141] libmachine: () Calling .GetVersion
	I0903 23:39:40.768030  168184 main.go:141] libmachine: () Calling .GetVersion
	I0903 23:39:40.768589  168184 main.go:141] libmachine: Using API Version  1
	I0903 23:39:40.768620  168184 main.go:141] libmachine: () Calling .SetConfigRaw
	I0903 23:39:40.768921  168184 main.go:141] libmachine: Using API Version  1
	I0903 23:39:40.768944  168184 main.go:141] libmachine: () Calling .SetConfigRaw
	I0903 23:39:40.769038  168184 main.go:141] libmachine: () Calling .GetMachineName
	I0903 23:39:40.769266  168184 main.go:141] libmachine: (embed-certs-088493) Calling .GetState
	I0903 23:39:40.769418  168184 main.go:141] libmachine: () Calling .GetMachineName
	I0903 23:39:40.770014  168184 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45587
	I0903 23:39:40.770554  168184 main.go:141] libmachine: () Calling .GetVersion
	I0903 23:39:40.771097  168184 main.go:141] libmachine: Using API Version  1
	I0903 23:39:40.771115  168184 main.go:141] libmachine: () Calling .SetConfigRaw
	I0903 23:39:40.771582  168184 main.go:141] libmachine: () Calling .GetMachineName
	I0903 23:39:40.772143  168184 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21341-109162/.minikube/bin/docker-machine-driver-kvm2
	I0903 23:39:40.772190  168184 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0903 23:39:40.773072  168184 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21341-109162/.minikube/bin/docker-machine-driver-kvm2
	I0903 23:39:40.773117  168184 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0903 23:39:40.773482  168184 addons.go:238] Setting addon default-storageclass=true in "embed-certs-088493"
	W0903 23:39:40.773506  168184 addons.go:247] addon default-storageclass should already be in state true
	I0903 23:39:40.773541  168184 host.go:66] Checking if "embed-certs-088493" exists ...
	I0903 23:39:40.773952  168184 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21341-109162/.minikube/bin/docker-machine-driver-kvm2
	I0903 23:39:40.773999  168184 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0903 23:39:40.774960  168184 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36051
	I0903 23:39:40.775401  168184 main.go:141] libmachine: () Calling .GetVersion
	I0903 23:39:40.775921  168184 main.go:141] libmachine: Using API Version  1
	I0903 23:39:40.775942  168184 main.go:141] libmachine: () Calling .SetConfigRaw
	I0903 23:39:40.776349  168184 main.go:141] libmachine: () Calling .GetMachineName
	I0903 23:39:40.776900  168184 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21341-109162/.minikube/bin/docker-machine-driver-kvm2
	I0903 23:39:40.776938  168184 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0903 23:39:40.793573  168184 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42823
	I0903 23:39:40.794210  168184 main.go:141] libmachine: () Calling .GetVersion
	I0903 23:39:40.794795  168184 main.go:141] libmachine: Using API Version  1
	I0903 23:39:40.794822  168184 main.go:141] libmachine: () Calling .SetConfigRaw
	I0903 23:39:40.794889  168184 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37443
	I0903 23:39:40.795389  168184 main.go:141] libmachine: () Calling .GetMachineName
	I0903 23:39:40.795443  168184 main.go:141] libmachine: () Calling .GetVersion
	I0903 23:39:40.795827  168184 main.go:141] libmachine: Using API Version  1
	I0903 23:39:40.795843  168184 main.go:141] libmachine: () Calling .SetConfigRaw
	I0903 23:39:40.796051  168184 main.go:141] libmachine: (embed-certs-088493) Calling .GetState
	I0903 23:39:40.796242  168184 main.go:141] libmachine: () Calling .GetMachineName
	I0903 23:39:40.796398  168184 main.go:141] libmachine: (embed-certs-088493) Calling .GetState
	I0903 23:39:40.798691  168184 main.go:141] libmachine: (embed-certs-088493) Calling .DriverName
	I0903 23:39:40.799273  168184 main.go:141] libmachine: (embed-certs-088493) Calling .DriverName
	I0903 23:39:40.800606  168184 out.go:179]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0903 23:39:40.800622  168184 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0903 23:39:40.801751  168184 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0903 23:39:40.801768  168184 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0903 23:39:40.801852  168184 main.go:141] libmachine: (embed-certs-088493) Calling .GetSSHHostname
	I0903 23:39:40.803035  168184 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I0903 23:39:40.804238  168184 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0903 23:39:40.804257  168184 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0903 23:39:40.804278  168184 main.go:141] libmachine: (embed-certs-088493) Calling .GetSSHHostname
	I0903 23:39:40.804408  168184 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39483
	I0903 23:39:40.804948  168184 main.go:141] libmachine: () Calling .GetVersion
	I0903 23:39:40.806065  168184 main.go:141] libmachine: (embed-certs-088493) DBG | domain embed-certs-088493 has defined MAC address 52:54:00:49:bd:07 in network mk-embed-certs-088493
	I0903 23:39:40.806185  168184 main.go:141] libmachine: Using API Version  1
	I0903 23:39:40.806214  168184 main.go:141] libmachine: () Calling .SetConfigRaw
	I0903 23:39:40.806622  168184 main.go:141] libmachine: () Calling .GetMachineName
	I0903 23:39:40.807366  168184 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21341-109162/.minikube/bin/docker-machine-driver-kvm2
	I0903 23:39:40.807410  168184 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0903 23:39:40.807634  168184 main.go:141] libmachine: (embed-certs-088493) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:bd:07", ip: ""} in network mk-embed-certs-088493: {Iface:virbr2 ExpiryTime:2025-09-04 00:39:17 +0000 UTC Type:0 Mac:52:54:00:49:bd:07 Iaid: IPaddr:192.168.50.143 Prefix:24 Hostname:embed-certs-088493 Clientid:01:52:54:00:49:bd:07}
	I0903 23:39:40.807666  168184 main.go:141] libmachine: (embed-certs-088493) DBG | domain embed-certs-088493 has defined IP address 192.168.50.143 and MAC address 52:54:00:49:bd:07 in network mk-embed-certs-088493
	I0903 23:39:40.808118  168184 main.go:141] libmachine: (embed-certs-088493) DBG | domain embed-certs-088493 has defined MAC address 52:54:00:49:bd:07 in network mk-embed-certs-088493
	I0903 23:39:40.808378  168184 main.go:141] libmachine: (embed-certs-088493) Calling .GetSSHPort
	I0903 23:39:40.808540  168184 main.go:141] libmachine: (embed-certs-088493) Calling .GetSSHKeyPath
	I0903 23:39:40.808652  168184 main.go:141] libmachine: (embed-certs-088493) Calling .GetSSHUsername
	I0903 23:39:40.808753  168184 sshutil.go:53] new ssh client: &{IP:192.168.50.143 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21341-109162/.minikube/machines/embed-certs-088493/id_rsa Username:docker}
	I0903 23:39:40.813952  168184 main.go:141] libmachine: (embed-certs-088493) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:bd:07", ip: ""} in network mk-embed-certs-088493: {Iface:virbr2 ExpiryTime:2025-09-04 00:39:17 +0000 UTC Type:0 Mac:52:54:00:49:bd:07 Iaid: IPaddr:192.168.50.143 Prefix:24 Hostname:embed-certs-088493 Clientid:01:52:54:00:49:bd:07}
	I0903 23:39:40.813983  168184 main.go:141] libmachine: (embed-certs-088493) DBG | domain embed-certs-088493 has defined IP address 192.168.50.143 and MAC address 52:54:00:49:bd:07 in network mk-embed-certs-088493
	I0903 23:39:40.814174  168184 main.go:141] libmachine: (embed-certs-088493) Calling .GetSSHPort
	I0903 23:39:40.814360  168184 main.go:141] libmachine: (embed-certs-088493) Calling .GetSSHKeyPath
	I0903 23:39:40.815752  168184 main.go:141] libmachine: (embed-certs-088493) Calling .GetSSHUsername
	I0903 23:39:40.815909  168184 sshutil.go:53] new ssh client: &{IP:192.168.50.143 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21341-109162/.minikube/machines/embed-certs-088493/id_rsa Username:docker}
	I0903 23:39:40.824248  168184 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42633
	I0903 23:39:40.824946  168184 main.go:141] libmachine: () Calling .GetVersion
	I0903 23:39:40.825622  168184 main.go:141] libmachine: Using API Version  1
	I0903 23:39:40.825648  168184 main.go:141] libmachine: () Calling .SetConfigRaw
	I0903 23:39:40.826219  168184 main.go:141] libmachine: () Calling .GetMachineName
	I0903 23:39:40.826431  168184 main.go:141] libmachine: (embed-certs-088493) Calling .GetState
	I0903 23:39:40.828287  168184 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45625
	I0903 23:39:40.828447  168184 main.go:141] libmachine: (embed-certs-088493) Calling .DriverName
	I0903 23:39:40.828934  168184 main.go:141] libmachine: () Calling .GetVersion
	I0903 23:39:40.829313  168184 main.go:141] libmachine: Using API Version  1
	I0903 23:39:40.829328  168184 main.go:141] libmachine: () Calling .SetConfigRaw
	I0903 23:39:40.829707  168184 main.go:141] libmachine: () Calling .GetMachineName
	I0903 23:39:40.829930  168184 main.go:141] libmachine: (embed-certs-088493) Calling .GetState
	I0903 23:39:40.830176  168184 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0903 23:39:36.593552  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | domain default-k8s-diff-port-799704 has defined MAC address 52:54:00:a0:5b:2e in network mk-default-k8s-diff-port-799704
	I0903 23:39:36.594179  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | unable to find current IP address of domain default-k8s-diff-port-799704 in network mk-default-k8s-diff-port-799704
	I0903 23:39:36.594207  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | I0903 23:39:36.594112  168561 retry.go:31] will retry after 1.356760931s: waiting for domain to come up
	I0903 23:39:37.952896  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | domain default-k8s-diff-port-799704 has defined MAC address 52:54:00:a0:5b:2e in network mk-default-k8s-diff-port-799704
	I0903 23:39:37.953568  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | unable to find current IP address of domain default-k8s-diff-port-799704 in network mk-default-k8s-diff-port-799704
	I0903 23:39:37.953607  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | I0903 23:39:37.953473  168561 retry.go:31] will retry after 1.294359259s: waiting for domain to come up
	I0903 23:39:39.249609  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | domain default-k8s-diff-port-799704 has defined MAC address 52:54:00:a0:5b:2e in network mk-default-k8s-diff-port-799704
	I0903 23:39:39.250217  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | unable to find current IP address of domain default-k8s-diff-port-799704 in network mk-default-k8s-diff-port-799704
	I0903 23:39:39.250262  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | I0903 23:39:39.250156  168561 retry.go:31] will retry after 1.639365303s: waiting for domain to come up
	I0903 23:39:40.891606  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | domain default-k8s-diff-port-799704 has defined MAC address 52:54:00:a0:5b:2e in network mk-default-k8s-diff-port-799704
	I0903 23:39:40.892251  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | unable to find current IP address of domain default-k8s-diff-port-799704 in network mk-default-k8s-diff-port-799704
	I0903 23:39:40.892279  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | I0903 23:39:40.892154  168561 retry.go:31] will retry after 2.142708119s: waiting for domain to come up
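
The retry.go lines above show libmachine's wait for the new VM's DHCP lease: probe, sleep a growing jittered interval, probe again. A generic sketch of that backoff loop, with the probe stubbed out (the real one asks libvirt for the domain's IP) and the growth factor as an assumption:

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // retryWithBackoff keeps probing until success, sleeping a growing,
    // jittered interval between attempts, like retry.go does above.
    func retryWithBackoff(probe func() error, attempts int) error {
        delay := time.Second
        for i := 0; i < attempts; i++ {
            if err := probe(); err == nil {
                return nil
            }
            // Jitter keeps concurrent waiters from probing in lockstep.
            sleep := delay + time.Duration(rand.Int63n(int64(delay)))
            fmt.Printf("will retry after %s: waiting for domain to come up\n", sleep)
            time.Sleep(sleep)
            delay = delay * 3 / 2 // growth factor is an assumption
        }
        return errors.New("domain never reported an IP address")
    }

    func main() {
        // Stand-in probe; the real one queries libvirt for the domain's IP.
        _ = retryWithBackoff(func() error { return errors.New("no IP yet") }, 5)
    }
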
	I0903 23:39:40.831548  168184 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0903 23:39:40.831567  168184 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0903 23:39:40.831594  168184 main.go:141] libmachine: (embed-certs-088493) Calling .GetSSHHostname
	I0903 23:39:40.831860  168184 main.go:141] libmachine: (embed-certs-088493) Calling .DriverName
	I0903 23:39:40.833031  168184 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0903 23:39:40.833048  168184 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0903 23:39:40.833066  168184 main.go:141] libmachine: (embed-certs-088493) Calling .GetSSHHostname
	I0903 23:39:40.835589  168184 main.go:141] libmachine: (embed-certs-088493) DBG | domain embed-certs-088493 has defined MAC address 52:54:00:49:bd:07 in network mk-embed-certs-088493
	I0903 23:39:40.836095  168184 main.go:141] libmachine: (embed-certs-088493) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:bd:07", ip: ""} in network mk-embed-certs-088493: {Iface:virbr2 ExpiryTime:2025-09-04 00:39:17 +0000 UTC Type:0 Mac:52:54:00:49:bd:07 Iaid: IPaddr:192.168.50.143 Prefix:24 Hostname:embed-certs-088493 Clientid:01:52:54:00:49:bd:07}
	I0903 23:39:40.836120  168184 main.go:141] libmachine: (embed-certs-088493) DBG | domain embed-certs-088493 has defined IP address 192.168.50.143 and MAC address 52:54:00:49:bd:07 in network mk-embed-certs-088493
	I0903 23:39:40.836634  168184 main.go:141] libmachine: (embed-certs-088493) DBG | domain embed-certs-088493 has defined MAC address 52:54:00:49:bd:07 in network mk-embed-certs-088493
	I0903 23:39:40.836881  168184 main.go:141] libmachine: (embed-certs-088493) Calling .GetSSHPort
	I0903 23:39:40.837063  168184 main.go:141] libmachine: (embed-certs-088493) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:bd:07", ip: ""} in network mk-embed-certs-088493: {Iface:virbr2 ExpiryTime:2025-09-04 00:39:17 +0000 UTC Type:0 Mac:52:54:00:49:bd:07 Iaid: IPaddr:192.168.50.143 Prefix:24 Hostname:embed-certs-088493 Clientid:01:52:54:00:49:bd:07}
	I0903 23:39:40.837087  168184 main.go:141] libmachine: (embed-certs-088493) DBG | domain embed-certs-088493 has defined IP address 192.168.50.143 and MAC address 52:54:00:49:bd:07 in network mk-embed-certs-088493
	I0903 23:39:40.837348  168184 main.go:141] libmachine: (embed-certs-088493) Calling .GetSSHPort
	I0903 23:39:40.838498  168184 main.go:141] libmachine: (embed-certs-088493) Calling .GetSSHKeyPath
	I0903 23:39:40.838667  168184 main.go:141] libmachine: (embed-certs-088493) Calling .GetSSHUsername
	I0903 23:39:40.838816  168184 sshutil.go:53] new ssh client: &{IP:192.168.50.143 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21341-109162/.minikube/machines/embed-certs-088493/id_rsa Username:docker}
	I0903 23:39:40.843815  168184 main.go:141] libmachine: (embed-certs-088493) Calling .GetSSHKeyPath
	I0903 23:39:40.844047  168184 main.go:141] libmachine: (embed-certs-088493) Calling .GetSSHUsername
	I0903 23:39:40.844370  168184 sshutil.go:53] new ssh client: &{IP:192.168.50.143 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21341-109162/.minikube/machines/embed-certs-088493/id_rsa Username:docker}
	I0903 23:39:41.113695  168184 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0903 23:39:41.140527  168184 node_ready.go:35] waiting up to 6m0s for node "embed-certs-088493" to be "Ready" ...
	I0903 23:39:41.252354  168184 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0903 23:39:41.252385  168184 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0903 23:39:41.306321  168184 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0903 23:39:41.310664  168184 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0903 23:39:41.310766  168184 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0903 23:39:41.341460  168184 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0903 23:39:41.341572  168184 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0903 23:39:41.348238  168184 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0903 23:39:41.399239  168184 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0903 23:39:41.399275  168184 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0903 23:39:41.412810  168184 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0903 23:39:41.412848  168184 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0903 23:39:41.489435  168184 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0903 23:39:41.538185  168184 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0903 23:39:41.538223  168184 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0903 23:39:41.592563  168184 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0903 23:39:41.592594  168184 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0903 23:39:41.676605  168184 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0903 23:39:41.676644  168184 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0903 23:39:41.728419  168184 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0903 23:39:41.728455  168184 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0903 23:39:41.766195  168184 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0903 23:39:41.766297  168184 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0903 23:39:41.819460  168184 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0903 23:39:41.819504  168184 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0903 23:39:41.870107  168184 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0903 23:39:41.870149  168184 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0903 23:39:41.918698  168184 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0903 23:39:42.966984  168184 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.660540637s)
	I0903 23:39:42.967054  168184 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.618774457s)
	I0903 23:39:42.967081  168184 main.go:141] libmachine: Making call to close driver server
	I0903 23:39:42.967098  168184 main.go:141] libmachine: (embed-certs-088493) Calling .Close
	I0903 23:39:42.967101  168184 main.go:141] libmachine: Making call to close driver server
	I0903 23:39:42.967114  168184 main.go:141] libmachine: (embed-certs-088493) Calling .Close
	I0903 23:39:42.967189  168184 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.477716601s)
	I0903 23:39:42.967236  168184 main.go:141] libmachine: Making call to close driver server
	I0903 23:39:42.967261  168184 main.go:141] libmachine: (embed-certs-088493) Calling .Close
	I0903 23:39:42.969478  168184 main.go:141] libmachine: Successfully made call to close driver server
	I0903 23:39:42.969480  168184 main.go:141] libmachine: (embed-certs-088493) DBG | Closing plugin on server side
	I0903 23:39:42.969503  168184 main.go:141] libmachine: Making call to close connection to plugin binary
	I0903 23:39:42.969506  168184 main.go:141] libmachine: Successfully made call to close driver server
	I0903 23:39:42.969513  168184 main.go:141] libmachine: (embed-certs-088493) DBG | Closing plugin on server side
	I0903 23:39:42.969523  168184 main.go:141] libmachine: Making call to close connection to plugin binary
	I0903 23:39:42.969513  168184 main.go:141] libmachine: Making call to close driver server
	I0903 23:39:42.969546  168184 main.go:141] libmachine: (embed-certs-088493) Calling .Close
	I0903 23:39:42.969559  168184 main.go:141] libmachine: (embed-certs-088493) DBG | Closing plugin on server side
	I0903 23:39:42.969588  168184 main.go:141] libmachine: Successfully made call to close driver server
	I0903 23:39:42.969601  168184 main.go:141] libmachine: Making call to close connection to plugin binary
	I0903 23:39:42.969611  168184 main.go:141] libmachine: Making call to close driver server
	I0903 23:39:42.969628  168184 main.go:141] libmachine: (embed-certs-088493) Calling .Close
	I0903 23:39:42.969708  168184 main.go:141] libmachine: Making call to close driver server
	I0903 23:39:42.969726  168184 main.go:141] libmachine: (embed-certs-088493) Calling .Close
	I0903 23:39:42.971080  168184 main.go:141] libmachine: (embed-certs-088493) DBG | Closing plugin on server side
	I0903 23:39:42.971088  168184 main.go:141] libmachine: Successfully made call to close driver server
	I0903 23:39:42.971098  168184 main.go:141] libmachine: (embed-certs-088493) DBG | Closing plugin on server side
	I0903 23:39:42.971104  168184 main.go:141] libmachine: Making call to close connection to plugin binary
	I0903 23:39:42.971084  168184 main.go:141] libmachine: Successfully made call to close driver server
	I0903 23:39:42.971185  168184 main.go:141] libmachine: Making call to close connection to plugin binary
	I0903 23:39:42.971197  168184 addons.go:479] Verifying addon metrics-server=true in "embed-certs-088493"
	I0903 23:39:42.971403  168184 main.go:141] libmachine: Successfully made call to close driver server
	I0903 23:39:42.971416  168184 main.go:141] libmachine: Making call to close connection to plugin binary
	I0903 23:39:43.018871  168184 main.go:141] libmachine: Making call to close driver server
	I0903 23:39:43.018900  168184 main.go:141] libmachine: (embed-certs-088493) Calling .Close
	I0903 23:39:43.019306  168184 main.go:141] libmachine: (embed-certs-088493) DBG | Closing plugin on server side
	I0903 23:39:43.019354  168184 main.go:141] libmachine: Successfully made call to close driver server
	I0903 23:39:43.019366  168184 main.go:141] libmachine: Making call to close connection to plugin binary
	W0903 23:39:43.162588  168184 node_ready.go:57] node "embed-certs-088493" has "Ready":"False" status (will retry)
	I0903 23:39:43.258660  168184 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.339847622s)
	I0903 23:39:43.258727  168184 main.go:141] libmachine: Making call to close driver server
	I0903 23:39:43.258741  168184 main.go:141] libmachine: (embed-certs-088493) Calling .Close
	I0903 23:39:43.259077  168184 main.go:141] libmachine: Successfully made call to close driver server
	I0903 23:39:43.259137  168184 main.go:141] libmachine: Making call to close connection to plugin binary
	I0903 23:39:43.259145  168184 main.go:141] libmachine: (embed-certs-088493) DBG | Closing plugin on server side
	I0903 23:39:43.259162  168184 main.go:141] libmachine: Making call to close driver server
	I0903 23:39:43.259279  168184 main.go:141] libmachine: (embed-certs-088493) Calling .Close
	I0903 23:39:43.259595  168184 main.go:141] libmachine: (embed-certs-088493) DBG | Closing plugin on server side
	I0903 23:39:43.259615  168184 main.go:141] libmachine: Successfully made call to close driver server
	I0903 23:39:43.259623  168184 main.go:141] libmachine: Making call to close connection to plugin binary
	I0903 23:39:43.260848  168184 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-088493 addons enable metrics-server
	
	I0903 23:39:43.261929  168184 out.go:179] * Enabled addons: metrics-server, storage-provisioner, default-storageclass, dashboard
	I0903 23:39:43.262942  168184 addons.go:514] duration metric: took 2.518204365s for enable addons: enabled=[metrics-server storage-provisioner default-storageclass dashboard]
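
The addon sequence above is the ssh_runner pattern throughout: stage each manifest onto the node, then run kubectl over SSH against the node's own kubeconfig. A sketch of one such invocation using golang.org/x/crypto/ssh, with the host, user, key path, and command taken from this log; the real runner adds retries and timeouts.

    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func runOverSSH(addr, user, keyPath, cmd string) (string, error) {
        key, err := os.ReadFile(keyPath)
        if err != nil {
            return "", err
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            return "", err
        }
        cfg := &ssh.ClientConfig{
            User:            user,
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // fresh test VMs, no known_hosts
        }
        client, err := ssh.Dial("tcp", addr, cfg)
        if err != nil {
            return "", err
        }
        defer client.Close()
        sess, err := client.NewSession()
        if err != nil {
            return "", err
        }
        defer sess.Close()
        out, err := sess.CombinedOutput(cmd)
        return string(out), err
    }

    func main() {
        out, err := runOverSSH("192.168.50.143:22", "docker",
            "/home/jenkins/minikube-integration/21341-109162/.minikube/machines/embed-certs-088493/id_rsa",
            "sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml")
        fmt.Println(out, err)
    }
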
	I0903 23:39:43.036707  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | domain default-k8s-diff-port-799704 has defined MAC address 52:54:00:a0:5b:2e in network mk-default-k8s-diff-port-799704
	I0903 23:39:43.037307  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | unable to find current IP address of domain default-k8s-diff-port-799704 in network mk-default-k8s-diff-port-799704
	I0903 23:39:43.037341  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | I0903 23:39:43.037251  168561 retry.go:31] will retry after 2.378633942s: waiting for domain to come up
	I0903 23:39:45.418699  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | domain default-k8s-diff-port-799704 has defined MAC address 52:54:00:a0:5b:2e in network mk-default-k8s-diff-port-799704
	I0903 23:39:45.419270  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | unable to find current IP address of domain default-k8s-diff-port-799704 in network mk-default-k8s-diff-port-799704
	I0903 23:39:45.419294  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | I0903 23:39:45.419170  168561 retry.go:31] will retry after 4.350956655s: waiting for domain to come up
	W0903 23:39:45.644356  168184 node_ready.go:57] node "embed-certs-088493" has "Ready":"False" status (will retry)
	W0903 23:39:47.702029  168184 node_ready.go:57] node "embed-certs-088493" has "Ready":"False" status (will retry)
	I0903 23:39:49.646957  168184 node_ready.go:49] node "embed-certs-088493" is "Ready"
	I0903 23:39:49.646992  168184 node_ready.go:38] duration metric: took 8.506385518s for node "embed-certs-088493" to be "Ready" ...
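
node_ready.go above polls the node object until its Ready condition reports True. A client-go sketch of that wait, with the 2s polling interval as an assumption:

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func waitNodeReady(cs *kubernetes.Clientset, name string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
            if err == nil {
                for _, c := range node.Status.Conditions {
                    if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
                        return nil
                    }
                }
            }
            time.Sleep(2 * time.Second) // polling interval assumed
        }
        return fmt.Errorf("node %q never became Ready", name)
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/21341-109162/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        fmt.Println(waitNodeReady(cs, "embed-certs-088493", 6*time.Minute))
    }
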
	I0903 23:39:49.647010  168184 api_server.go:52] waiting for apiserver process to appear ...
	I0903 23:39:49.647071  168184 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:39:49.674344  168184 api_server.go:72] duration metric: took 8.92968556s to wait for apiserver process to appear ...
	I0903 23:39:49.674379  168184 api_server.go:88] waiting for apiserver healthz status ...
	I0903 23:39:49.674406  168184 api_server.go:253] Checking apiserver healthz at https://192.168.50.143:8443/healthz ...
	I0903 23:39:49.683534  168184 api_server.go:279] https://192.168.50.143:8443/healthz returned 200:
	ok
	I0903 23:39:49.684659  168184 api_server.go:141] control plane version: v1.34.0
	I0903 23:39:49.684684  168184 api_server.go:131] duration metric: took 10.295954ms to wait for apiserver health ...
	I0903 23:39:49.684697  168184 system_pods.go:43] waiting for kube-system pods to appear ...
	I0903 23:39:49.689273  168184 system_pods.go:59] 8 kube-system pods found
	I0903 23:39:49.689307  168184 system_pods.go:61] "coredns-66bc5c9577-hg9bb" [f8c43287-ec9a-48ad-b799-e5bb4b30b817] Running
	I0903 23:39:49.689322  168184 system_pods.go:61] "etcd-embed-certs-088493" [0917f6cc-6edc-4812-81c3-15f318021f57] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0903 23:39:49.689331  168184 system_pods.go:61] "kube-apiserver-embed-certs-088493" [5324d5b1-225a-4bab-8624-807c65f7737f] Running
	I0903 23:39:49.689343  168184 system_pods.go:61] "kube-controller-manager-embed-certs-088493" [c15fb12e-7f6b-4bfe-977c-97d35447e245] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0903 23:39:49.689353  168184 system_pods.go:61] "kube-proxy-pgtpd" [083b9318-0780-4c96-8991-7534443b6159] Running
	I0903 23:39:49.689371  168184 system_pods.go:61] "kube-scheduler-embed-certs-088493" [41c8bd25-dbb6-4d53-8642-d6f837c5c859] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0903 23:39:49.689380  168184 system_pods.go:61] "metrics-server-746fcd58dc-85qvg" [000bf568-f6a0-4621-899d-788283765155] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0903 23:39:49.689416  168184 system_pods.go:61] "storage-provisioner" [7c1d1800-66c1-42f5-87c5-675fd6610230] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0903 23:39:49.689425  168184 system_pods.go:74] duration metric: took 4.720826ms to wait for pod list to return data ...
	I0903 23:39:49.689442  168184 default_sa.go:34] waiting for default service account to be created ...
	I0903 23:39:49.693818  168184 default_sa.go:45] found service account: "default"
	I0903 23:39:49.693835  168184 default_sa.go:55] duration metric: took 4.384486ms for default service account to be created ...
	I0903 23:39:49.693843  168184 system_pods.go:116] waiting for k8s-apps to be running ...
	I0903 23:39:49.697438  168184 system_pods.go:86] 8 kube-system pods found
	I0903 23:39:49.697471  168184 system_pods.go:89] "coredns-66bc5c9577-hg9bb" [f8c43287-ec9a-48ad-b799-e5bb4b30b817] Running
	I0903 23:39:49.697486  168184 system_pods.go:89] "etcd-embed-certs-088493" [0917f6cc-6edc-4812-81c3-15f318021f57] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0903 23:39:49.697493  168184 system_pods.go:89] "kube-apiserver-embed-certs-088493" [5324d5b1-225a-4bab-8624-807c65f7737f] Running
	I0903 23:39:49.697509  168184 system_pods.go:89] "kube-controller-manager-embed-certs-088493" [c15fb12e-7f6b-4bfe-977c-97d35447e245] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0903 23:39:49.697519  168184 system_pods.go:89] "kube-proxy-pgtpd" [083b9318-0780-4c96-8991-7534443b6159] Running
	I0903 23:39:49.697529  168184 system_pods.go:89] "kube-scheduler-embed-certs-088493" [41c8bd25-dbb6-4d53-8642-d6f837c5c859] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0903 23:39:49.697543  168184 system_pods.go:89] "metrics-server-746fcd58dc-85qvg" [000bf568-f6a0-4621-899d-788283765155] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0903 23:39:49.697557  168184 system_pods.go:89] "storage-provisioner" [7c1d1800-66c1-42f5-87c5-675fd6610230] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0903 23:39:49.697572  168184 system_pods.go:126] duration metric: took 3.722231ms to wait for k8s-apps to be running ...
	I0903 23:39:49.697586  168184 system_svc.go:44] waiting for kubelet service to be running ....
	I0903 23:39:49.697650  168184 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0903 23:39:49.722443  168184 system_svc.go:56] duration metric: took 24.84315ms WaitForService to wait for kubelet
	I0903 23:39:49.722482  168184 kubeadm.go:578] duration metric: took 8.977829577s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0903 23:39:49.722519  168184 node_conditions.go:102] verifying NodePressure condition ...
	I0903 23:39:49.728053  168184 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0903 23:39:49.728077  168184 node_conditions.go:123] node cpu capacity is 2
	I0903 23:39:49.728088  168184 node_conditions.go:105] duration metric: took 5.564387ms to run NodePressure ...
	I0903 23:39:49.728101  168184 start.go:241] waiting for startup goroutines ...
	I0903 23:39:49.728110  168184 start.go:246] waiting for cluster config update ...
	I0903 23:39:49.728123  168184 start.go:255] writing updated cluster config ...
	I0903 23:39:49.728441  168184 ssh_runner.go:195] Run: rm -f paused
	I0903 23:39:49.735381  168184 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0903 23:39:49.742029  168184 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-hg9bb" in "kube-system" namespace to be "Ready" or be gone ...
	I0903 23:39:49.750961  168184 pod_ready.go:94] pod "coredns-66bc5c9577-hg9bb" is "Ready"
	I0903 23:39:49.750990  168184 pod_ready.go:86] duration metric: took 8.940148ms for pod "coredns-66bc5c9577-hg9bb" in "kube-system" namespace to be "Ready" or be gone ...
	I0903 23:39:49.753806  168184 pod_ready.go:83] waiting for pod "etcd-embed-certs-088493" in "kube-system" namespace to be "Ready" or be gone ...
	I0903 23:39:49.772119  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | domain default-k8s-diff-port-799704 has defined MAC address 52:54:00:a0:5b:2e in network mk-default-k8s-diff-port-799704
	I0903 23:39:49.772626  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) found domain IP: 192.168.39.63
	I0903 23:39:49.772661  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | domain default-k8s-diff-port-799704 has current primary IP address 192.168.39.63 and MAC address 52:54:00:a0:5b:2e in network mk-default-k8s-diff-port-799704
	I0903 23:39:49.772672  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) reserving static IP address...
	I0903 23:39:49.773083  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-799704", mac: "52:54:00:a0:5b:2e", ip: "192.168.39.63"} in network mk-default-k8s-diff-port-799704: {Iface:virbr1 ExpiryTime:2025-09-04 00:39:43 +0000 UTC Type:0 Mac:52:54:00:a0:5b:2e Iaid: IPaddr:192.168.39.63 Prefix:24 Hostname:default-k8s-diff-port-799704 Clientid:01:52:54:00:a0:5b:2e}
	I0903 23:39:49.773114  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | skip adding static IP to network mk-default-k8s-diff-port-799704 - found existing host DHCP lease matching {name: "default-k8s-diff-port-799704", mac: "52:54:00:a0:5b:2e", ip: "192.168.39.63"}
	I0903 23:39:49.773130  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) reserved static IP address 192.168.39.63 for domain default-k8s-diff-port-799704
	I0903 23:39:49.773143  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) waiting for SSH...
	I0903 23:39:49.773158  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | Getting to WaitForSSH function...
	I0903 23:39:49.775358  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | domain default-k8s-diff-port-799704 has defined MAC address 52:54:00:a0:5b:2e in network mk-default-k8s-diff-port-799704
	I0903 23:39:49.775784  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:5b:2e", ip: ""} in network mk-default-k8s-diff-port-799704: {Iface:virbr1 ExpiryTime:2025-09-04 00:39:43 +0000 UTC Type:0 Mac:52:54:00:a0:5b:2e Iaid: IPaddr:192.168.39.63 Prefix:24 Hostname:default-k8s-diff-port-799704 Clientid:01:52:54:00:a0:5b:2e}
	I0903 23:39:49.775821  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | domain default-k8s-diff-port-799704 has defined IP address 192.168.39.63 and MAC address 52:54:00:a0:5b:2e in network mk-default-k8s-diff-port-799704
	I0903 23:39:49.775914  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | Using SSH client type: external
	I0903 23:39:49.775969  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | Using SSH private key: /home/jenkins/minikube-integration/21341-109162/.minikube/machines/default-k8s-diff-port-799704/id_rsa (-rw-------)
	I0903 23:39:49.776034  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.63 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/21341-109162/.minikube/machines/default-k8s-diff-port-799704/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0903 23:39:49.776052  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | About to run SSH command:
	I0903 23:39:49.776061  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | exit 0
	I0903 23:39:49.901906  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | SSH cmd err, output: <nil>: 
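
WaitForSSH shells out to the system ssh client with the flags shown above and runs "exit 0" until the connection succeeds. A sketch of that probe via os/exec, with flags and key path copied from the log and the retry cadence as an assumption:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func waitForSSH(host, keyPath string) error {
        args := []string{
            "-F", "/dev/null",
            "-o", "ConnectionAttempts=3", "-o", "ConnectTimeout=10",
            "-o", "StrictHostKeyChecking=no", "-o", "UserKnownHostsFile=/dev/null",
            "-o", "IdentitiesOnly=yes", "-i", keyPath,
            "-p", "22", "docker@" + host, "exit 0",
        }
        for attempt := 0; attempt < 60; attempt++ {
            // Success of "exit 0" means sshd is up and accepts our key.
            if err := exec.Command("/usr/bin/ssh", args...).Run(); err == nil {
                return nil
            }
            time.Sleep(2 * time.Second)
        }
        return fmt.Errorf("ssh to %s never came up", host)
    }

    func main() {
        fmt.Println(waitForSSH("192.168.39.63",
            "/home/jenkins/minikube-integration/21341-109162/.minikube/machines/default-k8s-diff-port-799704/id_rsa"))
    }
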
	I0903 23:39:49.902261  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) Calling .GetConfigRaw
	I0903 23:39:49.902844  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) Calling .GetIP
	I0903 23:39:49.905187  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | domain default-k8s-diff-port-799704 has defined MAC address 52:54:00:a0:5b:2e in network mk-default-k8s-diff-port-799704
	I0903 23:39:49.905557  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:5b:2e", ip: ""} in network mk-default-k8s-diff-port-799704: {Iface:virbr1 ExpiryTime:2025-09-04 00:39:43 +0000 UTC Type:0 Mac:52:54:00:a0:5b:2e Iaid: IPaddr:192.168.39.63 Prefix:24 Hostname:default-k8s-diff-port-799704 Clientid:01:52:54:00:a0:5b:2e}
	I0903 23:39:49.905588  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | domain default-k8s-diff-port-799704 has defined IP address 192.168.39.63 and MAC address 52:54:00:a0:5b:2e in network mk-default-k8s-diff-port-799704
	I0903 23:39:49.905853  168525 profile.go:143] Saving config to /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/default-k8s-diff-port-799704/config.json ...
	I0903 23:39:49.906117  168525 machine.go:93] provisionDockerMachine start ...
	I0903 23:39:49.906164  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) Calling .DriverName
	I0903 23:39:49.906436  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) Calling .GetSSHHostname
	I0903 23:39:49.909118  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | domain default-k8s-diff-port-799704 has defined MAC address 52:54:00:a0:5b:2e in network mk-default-k8s-diff-port-799704
	I0903 23:39:49.909485  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:5b:2e", ip: ""} in network mk-default-k8s-diff-port-799704: {Iface:virbr1 ExpiryTime:2025-09-04 00:39:43 +0000 UTC Type:0 Mac:52:54:00:a0:5b:2e Iaid: IPaddr:192.168.39.63 Prefix:24 Hostname:default-k8s-diff-port-799704 Clientid:01:52:54:00:a0:5b:2e}
	I0903 23:39:49.909517  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | domain default-k8s-diff-port-799704 has defined IP address 192.168.39.63 and MAC address 52:54:00:a0:5b:2e in network mk-default-k8s-diff-port-799704
	I0903 23:39:49.909628  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) Calling .GetSSHPort
	I0903 23:39:49.909805  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) Calling .GetSSHKeyPath
	I0903 23:39:49.909987  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) Calling .GetSSHKeyPath
	I0903 23:39:49.910151  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) Calling .GetSSHUsername
	I0903 23:39:49.910306  168525 main.go:141] libmachine: Using SSH client type: native
	I0903 23:39:49.910527  168525 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83a420] 0x83d120 <nil>  [] 0s} 192.168.39.63 22 <nil> <nil>}
	I0903 23:39:49.910537  168525 main.go:141] libmachine: About to run SSH command:
	hostname
	I0903 23:39:50.014640  168525 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0903 23:39:50.014669  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) Calling .GetMachineName
	I0903 23:39:50.014904  168525 buildroot.go:166] provisioning hostname "default-k8s-diff-port-799704"
	I0903 23:39:50.014929  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) Calling .GetMachineName
	I0903 23:39:50.015114  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) Calling .GetSSHHostname
	I0903 23:39:50.018055  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | domain default-k8s-diff-port-799704 has defined MAC address 52:54:00:a0:5b:2e in network mk-default-k8s-diff-port-799704
	I0903 23:39:50.018422  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:5b:2e", ip: ""} in network mk-default-k8s-diff-port-799704: {Iface:virbr1 ExpiryTime:2025-09-04 00:39:43 +0000 UTC Type:0 Mac:52:54:00:a0:5b:2e Iaid: IPaddr:192.168.39.63 Prefix:24 Hostname:default-k8s-diff-port-799704 Clientid:01:52:54:00:a0:5b:2e}
	I0903 23:39:50.018472  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | domain default-k8s-diff-port-799704 has defined IP address 192.168.39.63 and MAC address 52:54:00:a0:5b:2e in network mk-default-k8s-diff-port-799704
	I0903 23:39:50.018636  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) Calling .GetSSHPort
	I0903 23:39:50.018849  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) Calling .GetSSHKeyPath
	I0903 23:39:50.019076  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) Calling .GetSSHKeyPath
	I0903 23:39:50.019257  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) Calling .GetSSHUsername
	I0903 23:39:50.019426  168525 main.go:141] libmachine: Using SSH client type: native
	I0903 23:39:50.019678  168525 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83a420] 0x83d120 <nil>  [] 0s} 192.168.39.63 22 <nil> <nil>}
	I0903 23:39:50.019694  168525 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-799704 && echo "default-k8s-diff-port-799704" | sudo tee /etc/hostname
	I0903 23:39:50.141537  168525 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-799704
	
	I0903 23:39:50.141574  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) Calling .GetSSHHostname
	I0903 23:39:50.144682  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | domain default-k8s-diff-port-799704 has defined MAC address 52:54:00:a0:5b:2e in network mk-default-k8s-diff-port-799704
	I0903 23:39:50.145019  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:5b:2e", ip: ""} in network mk-default-k8s-diff-port-799704: {Iface:virbr1 ExpiryTime:2025-09-04 00:39:43 +0000 UTC Type:0 Mac:52:54:00:a0:5b:2e Iaid: IPaddr:192.168.39.63 Prefix:24 Hostname:default-k8s-diff-port-799704 Clientid:01:52:54:00:a0:5b:2e}
	I0903 23:39:50.145049  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | domain default-k8s-diff-port-799704 has defined IP address 192.168.39.63 and MAC address 52:54:00:a0:5b:2e in network mk-default-k8s-diff-port-799704
	I0903 23:39:50.145195  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) Calling .GetSSHPort
	I0903 23:39:50.145418  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) Calling .GetSSHKeyPath
	I0903 23:39:50.145562  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) Calling .GetSSHKeyPath
	I0903 23:39:50.145700  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) Calling .GetSSHUsername
	I0903 23:39:50.145911  168525 main.go:141] libmachine: Using SSH client type: native
	I0903 23:39:50.146180  168525 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83a420] 0x83d120 <nil>  [] 0s} 192.168.39.63 22 <nil> <nil>}
	I0903 23:39:50.146199  168525 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-799704' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-799704/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-799704' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0903 23:39:50.255397  168525 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0903 23:39:50.255427  168525 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21341-109162/.minikube CaCertPath:/home/jenkins/minikube-integration/21341-109162/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21341-109162/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21341-109162/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21341-109162/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21341-109162/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21341-109162/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21341-109162/.minikube}
	I0903 23:39:50.255451  168525 buildroot.go:174] setting up certificates
	I0903 23:39:50.255466  168525 provision.go:84] configureAuth start
	I0903 23:39:50.255483  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) Calling .GetMachineName
	I0903 23:39:50.255836  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) Calling .GetIP
	I0903 23:39:50.259446  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | domain default-k8s-diff-port-799704 has defined MAC address 52:54:00:a0:5b:2e in network mk-default-k8s-diff-port-799704
	I0903 23:39:50.259884  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:5b:2e", ip: ""} in network mk-default-k8s-diff-port-799704: {Iface:virbr1 ExpiryTime:2025-09-04 00:39:43 +0000 UTC Type:0 Mac:52:54:00:a0:5b:2e Iaid: IPaddr:192.168.39.63 Prefix:24 Hostname:default-k8s-diff-port-799704 Clientid:01:52:54:00:a0:5b:2e}
	I0903 23:39:50.259914  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | domain default-k8s-diff-port-799704 has defined IP address 192.168.39.63 and MAC address 52:54:00:a0:5b:2e in network mk-default-k8s-diff-port-799704
	I0903 23:39:50.260088  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) Calling .GetSSHHostname
	I0903 23:39:50.262682  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | domain default-k8s-diff-port-799704 has defined MAC address 52:54:00:a0:5b:2e in network mk-default-k8s-diff-port-799704
	I0903 23:39:50.263060  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:5b:2e", ip: ""} in network mk-default-k8s-diff-port-799704: {Iface:virbr1 ExpiryTime:2025-09-04 00:39:43 +0000 UTC Type:0 Mac:52:54:00:a0:5b:2e Iaid: IPaddr:192.168.39.63 Prefix:24 Hostname:default-k8s-diff-port-799704 Clientid:01:52:54:00:a0:5b:2e}
	I0903 23:39:50.263100  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | domain default-k8s-diff-port-799704 has defined IP address 192.168.39.63 and MAC address 52:54:00:a0:5b:2e in network mk-default-k8s-diff-port-799704
	I0903 23:39:50.263203  168525 provision.go:143] copyHostCerts
	I0903 23:39:50.263281  168525 exec_runner.go:144] found /home/jenkins/minikube-integration/21341-109162/.minikube/ca.pem, removing ...
	I0903 23:39:50.263299  168525 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21341-109162/.minikube/ca.pem
	I0903 23:39:50.263354  168525 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21341-109162/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21341-109162/.minikube/ca.pem (1078 bytes)
	I0903 23:39:50.263438  168525 exec_runner.go:144] found /home/jenkins/minikube-integration/21341-109162/.minikube/cert.pem, removing ...
	I0903 23:39:50.263446  168525 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21341-109162/.minikube/cert.pem
	I0903 23:39:50.263465  168525 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21341-109162/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21341-109162/.minikube/cert.pem (1123 bytes)
	I0903 23:39:50.263519  168525 exec_runner.go:144] found /home/jenkins/minikube-integration/21341-109162/.minikube/key.pem, removing ...
	I0903 23:39:50.263526  168525 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21341-109162/.minikube/key.pem
	I0903 23:39:50.263542  168525 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21341-109162/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21341-109162/.minikube/key.pem (1675 bytes)
	I0903 23:39:50.263587  168525 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21341-109162/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21341-109162/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21341-109162/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-799704 san=[127.0.0.1 192.168.39.63 default-k8s-diff-port-799704 localhost minikube]
	I0903 23:39:50.602313  168525 provision.go:177] copyRemoteCerts
	I0903 23:39:50.602368  168525 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0903 23:39:50.602392  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) Calling .GetSSHHostname
	I0903 23:39:50.604930  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | domain default-k8s-diff-port-799704 has defined MAC address 52:54:00:a0:5b:2e in network mk-default-k8s-diff-port-799704
	I0903 23:39:50.605268  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:5b:2e", ip: ""} in network mk-default-k8s-diff-port-799704: {Iface:virbr1 ExpiryTime:2025-09-04 00:39:43 +0000 UTC Type:0 Mac:52:54:00:a0:5b:2e Iaid: IPaddr:192.168.39.63 Prefix:24 Hostname:default-k8s-diff-port-799704 Clientid:01:52:54:00:a0:5b:2e}
	I0903 23:39:50.605301  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | domain default-k8s-diff-port-799704 has defined IP address 192.168.39.63 and MAC address 52:54:00:a0:5b:2e in network mk-default-k8s-diff-port-799704
	I0903 23:39:50.605502  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) Calling .GetSSHPort
	I0903 23:39:50.605701  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) Calling .GetSSHKeyPath
	I0903 23:39:50.605883  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) Calling .GetSSHUsername
	I0903 23:39:50.606030  168525 sshutil.go:53] new ssh client: &{IP:192.168.39.63 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21341-109162/.minikube/machines/default-k8s-diff-port-799704/id_rsa Username:docker}
	I0903 23:39:50.692788  168525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0903 23:39:50.719278  168525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0903 23:39:50.746292  168525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0903 23:39:50.774559  168525 provision.go:87] duration metric: took 519.07244ms to configureAuth
	I0903 23:39:50.774589  168525 buildroot.go:189] setting minikube options for container-runtime
	I0903 23:39:50.774798  168525 config.go:182] Loaded profile config "default-k8s-diff-port-799704": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0903 23:39:50.774882  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) Calling .GetSSHHostname
	I0903 23:39:50.777459  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | domain default-k8s-diff-port-799704 has defined MAC address 52:54:00:a0:5b:2e in network mk-default-k8s-diff-port-799704
	I0903 23:39:50.777817  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:5b:2e", ip: ""} in network mk-default-k8s-diff-port-799704: {Iface:virbr1 ExpiryTime:2025-09-04 00:39:43 +0000 UTC Type:0 Mac:52:54:00:a0:5b:2e Iaid: IPaddr:192.168.39.63 Prefix:24 Hostname:default-k8s-diff-port-799704 Clientid:01:52:54:00:a0:5b:2e}
	I0903 23:39:50.777847  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | domain default-k8s-diff-port-799704 has defined IP address 192.168.39.63 and MAC address 52:54:00:a0:5b:2e in network mk-default-k8s-diff-port-799704
	I0903 23:39:50.778019  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) Calling .GetSSHPort
	I0903 23:39:50.778203  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) Calling .GetSSHKeyPath
	I0903 23:39:50.778379  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) Calling .GetSSHKeyPath
	I0903 23:39:50.778490  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) Calling .GetSSHUsername
	I0903 23:39:50.778617  168525 main.go:141] libmachine: Using SSH client type: native
	I0903 23:39:50.778835  168525 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83a420] 0x83d120 <nil>  [] 0s} 192.168.39.63 22 <nil> <nil>}
	I0903 23:39:50.778855  168525 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0903 23:39:51.011695  168525 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0903 23:39:51.011726  168525 machine.go:96] duration metric: took 1.105578172s to provisionDockerMachine
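The step above writes CRIO_MINIKUBE_OPTIONS to /etc/sysconfig/crio.minikube and restarts cri-o over SSH. A minimal manual spot-check of that step, assuming the profile is still running (the profile name and file path are taken from the log above):

	# confirm the option file minikube wrote:
	minikube ssh -p default-k8s-diff-port-799704 "cat /etc/sysconfig/crio.minikube"
	# confirm cri-o survived the restart:
	minikube ssh -p default-k8s-diff-port-799704 "sudo systemctl is-active crio"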
	I0903 23:39:51.011744  168525 start.go:293] postStartSetup for "default-k8s-diff-port-799704" (driver="kvm2")
	I0903 23:39:51.011757  168525 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0903 23:39:51.011779  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) Calling .DriverName
	I0903 23:39:51.012153  168525 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0903 23:39:51.012191  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) Calling .GetSSHHostname
	I0903 23:39:51.015053  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | domain default-k8s-diff-port-799704 has defined MAC address 52:54:00:a0:5b:2e in network mk-default-k8s-diff-port-799704
	I0903 23:39:51.015411  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:5b:2e", ip: ""} in network mk-default-k8s-diff-port-799704: {Iface:virbr1 ExpiryTime:2025-09-04 00:39:43 +0000 UTC Type:0 Mac:52:54:00:a0:5b:2e Iaid: IPaddr:192.168.39.63 Prefix:24 Hostname:default-k8s-diff-port-799704 Clientid:01:52:54:00:a0:5b:2e}
	I0903 23:39:51.015438  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | domain default-k8s-diff-port-799704 has defined IP address 192.168.39.63 and MAC address 52:54:00:a0:5b:2e in network mk-default-k8s-diff-port-799704
	I0903 23:39:51.015633  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) Calling .GetSSHPort
	I0903 23:39:51.015847  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) Calling .GetSSHKeyPath
	I0903 23:39:51.016003  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) Calling .GetSSHUsername
	I0903 23:39:51.016183  168525 sshutil.go:53] new ssh client: &{IP:192.168.39.63 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21341-109162/.minikube/machines/default-k8s-diff-port-799704/id_rsa Username:docker}
	I0903 23:39:51.106391  168525 ssh_runner.go:195] Run: cat /etc/os-release
	I0903 23:39:51.111268  168525 info.go:137] Remote host: Buildroot 2025.02
	I0903 23:39:51.111302  168525 filesync.go:126] Scanning /home/jenkins/minikube-integration/21341-109162/.minikube/addons for local assets ...
	I0903 23:39:51.111378  168525 filesync.go:126] Scanning /home/jenkins/minikube-integration/21341-109162/.minikube/files for local assets ...
	I0903 23:39:51.111475  168525 filesync.go:149] local asset: /home/jenkins/minikube-integration/21341-109162/.minikube/files/etc/ssl/certs/1132882.pem -> 1132882.pem in /etc/ssl/certs
	I0903 23:39:51.111606  168525 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0903 23:39:51.124981  168525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/files/etc/ssl/certs/1132882.pem --> /etc/ssl/certs/1132882.pem (1708 bytes)
	I0903 23:39:51.157053  168525 start.go:296] duration metric: took 145.28983ms for postStartSetup
	I0903 23:39:51.157106  168525 fix.go:56] duration metric: took 19.734351982s for fixHost
	I0903 23:39:51.157130  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) Calling .GetSSHHostname
	I0903 23:39:51.159836  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | domain default-k8s-diff-port-799704 has defined MAC address 52:54:00:a0:5b:2e in network mk-default-k8s-diff-port-799704
	I0903 23:39:51.160235  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:5b:2e", ip: ""} in network mk-default-k8s-diff-port-799704: {Iface:virbr1 ExpiryTime:2025-09-04 00:39:43 +0000 UTC Type:0 Mac:52:54:00:a0:5b:2e Iaid: IPaddr:192.168.39.63 Prefix:24 Hostname:default-k8s-diff-port-799704 Clientid:01:52:54:00:a0:5b:2e}
	I0903 23:39:51.160300  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | domain default-k8s-diff-port-799704 has defined IP address 192.168.39.63 and MAC address 52:54:00:a0:5b:2e in network mk-default-k8s-diff-port-799704
	I0903 23:39:51.160437  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) Calling .GetSSHPort
	I0903 23:39:51.160644  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) Calling .GetSSHKeyPath
	I0903 23:39:51.160820  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) Calling .GetSSHKeyPath
	I0903 23:39:51.161007  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) Calling .GetSSHUsername
	I0903 23:39:51.161249  168525 main.go:141] libmachine: Using SSH client type: native
	I0903 23:39:51.161542  168525 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83a420] 0x83d120 <nil>  [] 0s} 192.168.39.63 22 <nil> <nil>}
	I0903 23:39:51.161568  168525 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0903 23:39:51.267613  168525 main.go:141] libmachine: SSH cmd err, output: <nil>: 1756942791.225994565
	
	I0903 23:39:51.267649  168525 fix.go:216] guest clock: 1756942791.225994565
	I0903 23:39:51.267659  168525 fix.go:229] Guest: 2025-09-03 23:39:51.225994565 +0000 UTC Remote: 2025-09-03 23:39:51.1571123 +0000 UTC m=+19.923532049 (delta=68.882265ms)
	I0903 23:39:51.267680  168525 fix.go:200] guest clock delta is within tolerance: 68.882265ms
	I0903 23:39:51.267685  168525 start.go:83] releasing machines lock for "default-k8s-diff-port-799704", held for 19.844953372s
	I0903 23:39:51.267705  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) Calling .DriverName
	I0903 23:39:51.267968  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) Calling .GetIP
	I0903 23:39:51.271046  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | domain default-k8s-diff-port-799704 has defined MAC address 52:54:00:a0:5b:2e in network mk-default-k8s-diff-port-799704
	I0903 23:39:51.271416  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:5b:2e", ip: ""} in network mk-default-k8s-diff-port-799704: {Iface:virbr1 ExpiryTime:2025-09-04 00:39:43 +0000 UTC Type:0 Mac:52:54:00:a0:5b:2e Iaid: IPaddr:192.168.39.63 Prefix:24 Hostname:default-k8s-diff-port-799704 Clientid:01:52:54:00:a0:5b:2e}
	I0903 23:39:51.271440  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | domain default-k8s-diff-port-799704 has defined IP address 192.168.39.63 and MAC address 52:54:00:a0:5b:2e in network mk-default-k8s-diff-port-799704
	I0903 23:39:51.271654  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) Calling .DriverName
	I0903 23:39:51.272313  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) Calling .DriverName
	I0903 23:39:51.272572  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) Calling .DriverName
	I0903 23:39:51.272657  168525 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0903 23:39:51.272709  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) Calling .GetSSHHostname
	I0903 23:39:51.272800  168525 ssh_runner.go:195] Run: cat /version.json
	I0903 23:39:51.272831  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) Calling .GetSSHHostname
	I0903 23:39:51.275925  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | domain default-k8s-diff-port-799704 has defined MAC address 52:54:00:a0:5b:2e in network mk-default-k8s-diff-port-799704
	I0903 23:39:51.276358  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:5b:2e", ip: ""} in network mk-default-k8s-diff-port-799704: {Iface:virbr1 ExpiryTime:2025-09-04 00:39:43 +0000 UTC Type:0 Mac:52:54:00:a0:5b:2e Iaid: IPaddr:192.168.39.63 Prefix:24 Hostname:default-k8s-diff-port-799704 Clientid:01:52:54:00:a0:5b:2e}
	I0903 23:39:51.276389  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | domain default-k8s-diff-port-799704 has defined IP address 192.168.39.63 and MAC address 52:54:00:a0:5b:2e in network mk-default-k8s-diff-port-799704
	I0903 23:39:51.276409  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | domain default-k8s-diff-port-799704 has defined MAC address 52:54:00:a0:5b:2e in network mk-default-k8s-diff-port-799704
	I0903 23:39:51.276565  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) Calling .GetSSHPort
	I0903 23:39:51.276733  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) Calling .GetSSHKeyPath
	I0903 23:39:51.276885  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:5b:2e", ip: ""} in network mk-default-k8s-diff-port-799704: {Iface:virbr1 ExpiryTime:2025-09-04 00:39:43 +0000 UTC Type:0 Mac:52:54:00:a0:5b:2e Iaid: IPaddr:192.168.39.63 Prefix:24 Hostname:default-k8s-diff-port-799704 Clientid:01:52:54:00:a0:5b:2e}
	I0903 23:39:51.276908  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | domain default-k8s-diff-port-799704 has defined IP address 192.168.39.63 and MAC address 52:54:00:a0:5b:2e in network mk-default-k8s-diff-port-799704
	I0903 23:39:51.276918  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) Calling .GetSSHUsername
	I0903 23:39:51.277054  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) Calling .GetSSHPort
	I0903 23:39:51.277112  168525 sshutil.go:53] new ssh client: &{IP:192.168.39.63 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21341-109162/.minikube/machines/default-k8s-diff-port-799704/id_rsa Username:docker}
	I0903 23:39:51.277187  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) Calling .GetSSHKeyPath
	I0903 23:39:51.277335  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) Calling .GetSSHUsername
	I0903 23:39:51.277486  168525 sshutil.go:53] new ssh client: &{IP:192.168.39.63 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21341-109162/.minikube/machines/default-k8s-diff-port-799704/id_rsa Username:docker}
	I0903 23:39:51.379057  168525 ssh_runner.go:195] Run: systemctl --version
	I0903 23:39:51.384960  168525 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0903 23:39:51.529307  168525 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0903 23:39:51.537936  168525 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0903 23:39:51.538011  168525 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0903 23:39:51.558368  168525 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0903 23:39:51.558394  168525 start.go:495] detecting cgroup driver to use...
	I0903 23:39:51.558466  168525 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0903 23:39:51.578951  168525 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0903 23:39:51.596694  168525 docker.go:218] disabling cri-docker service (if available) ...
	I0903 23:39:51.596752  168525 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0903 23:39:51.613345  168525 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0903 23:39:51.627714  168525 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0903 23:39:51.771138  168525 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0903 23:39:51.904861  168525 docker.go:234] disabling docker service ...
	I0903 23:39:51.904942  168525 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0903 23:39:51.921699  168525 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0903 23:39:51.935975  168525 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0903 23:39:52.148548  168525 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0903 23:39:52.296698  168525 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0903 23:39:52.312273  168525 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0903 23:39:52.336148  168525 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0903 23:39:52.336224  168525 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0903 23:39:52.348966  168525 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0903 23:39:52.349044  168525 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0903 23:39:52.362982  168525 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0903 23:39:52.379362  168525 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0903 23:39:52.391934  168525 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0903 23:39:52.409486  168525 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0903 23:39:52.422712  168525 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0903 23:39:52.442694  168525 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0903 23:39:52.454945  168525 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0903 23:39:52.465176  168525 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0903 23:39:52.465229  168525 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0903 23:39:52.484711  168525 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0903 23:39:52.497721  168525 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0903 23:39:52.656667  168525 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0903 23:39:52.772929  168525 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0903 23:39:52.773004  168525 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0903 23:39:52.778525  168525 start.go:563] Will wait 60s for crictl version
	I0903 23:39:52.778587  168525 ssh_runner.go:195] Run: which crictl
	I0903 23:39:52.782973  168525 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0903 23:39:52.831724  168525 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0903 23:39:52.831911  168525 ssh_runner.go:195] Run: crio --version
	I0903 23:39:52.862674  168525 ssh_runner.go:195] Run: crio --version
	I0903 23:39:52.892236  168525 out.go:179] * Preparing Kubernetes v1.34.0 on CRI-O 1.29.1 ...
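The configuration pass above pins cri-o's pause image and sets cgroup_manager to "cgroupfs" in /etc/crio/crio.conf.d/02-crio.conf before restarting the runtime. Since the kubeadm failure that follows is the classic symptom of a kubelet/runtime cgroup-driver mismatch, a quick cross-check of both sides, run inside the affected guest, is a reasonable first step. This is a sketch: the cri-o path is the file the sed commands above edit, and cgroupDriver is the standard KubeletConfiguration key that kubeadm typically writes to /var/lib/kubelet/config.yaml:

	# what cri-o was configured with:
	grep -E '^[[:space:]]*cgroup_manager' /etc/crio/crio.conf.d/02-crio.conf
	# what the kubelet side is using:
	grep -i cgroup /var/lib/kubelet/config.yaml /var/lib/kubelet/kubeadm-flags.env 2>/dev/null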
	I0903 23:39:53.350090  161984 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0903 23:39:53.350225  161984 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0903 23:39:53.352239  161984 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0903 23:39:53.352325  161984 kubeadm.go:310] [preflight] Running pre-flight checks
	I0903 23:39:53.352429  161984 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0903 23:39:53.352559  161984 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0903 23:39:53.352700  161984 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0903 23:39:53.352785  161984 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0903 23:39:53.353884  161984 out.go:252]   - Generating certificates and keys ...
	I0903 23:39:53.354002  161984 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0903 23:39:53.354096  161984 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0903 23:39:53.354204  161984 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0903 23:39:53.354294  161984 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0903 23:39:53.354408  161984 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0903 23:39:53.354488  161984 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0903 23:39:53.354571  161984 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0903 23:39:53.354691  161984 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0903 23:39:53.354803  161984 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0903 23:39:53.354908  161984 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0903 23:39:53.354963  161984 kubeadm.go:310] [certs] Using the existing "sa" key
	I0903 23:39:53.355043  161984 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0903 23:39:53.355116  161984 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0903 23:39:53.355189  161984 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0903 23:39:53.355279  161984 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0903 23:39:53.355378  161984 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0903 23:39:53.355503  161984 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0903 23:39:53.355595  161984 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0903 23:39:53.355639  161984 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0903 23:39:53.355708  161984 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0903 23:39:53.356804  161984 out.go:252]   - Booting up control plane ...
	I0903 23:39:53.356945  161984 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0903 23:39:53.357090  161984 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0903 23:39:53.357200  161984 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0903 23:39:53.357322  161984 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0903 23:39:53.357557  161984 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0903 23:39:53.357628  161984 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0903 23:39:53.357717  161984 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0903 23:39:53.357955  161984 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0903 23:39:53.358039  161984 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0903 23:39:53.358267  161984 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0903 23:39:53.358357  161984 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0903 23:39:53.358607  161984 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0903 23:39:53.358690  161984 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0903 23:39:53.358948  161984 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0903 23:39:53.359032  161984 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0903 23:39:53.359346  161984 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0903 23:39:53.359365  161984 kubeadm.go:310] 
	I0903 23:39:53.359417  161984 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0903 23:39:53.359470  161984 kubeadm.go:310] 		timed out waiting for the condition
	I0903 23:39:53.359476  161984 kubeadm.go:310] 
	I0903 23:39:53.359539  161984 kubeadm.go:310] 	This error is likely caused by:
	I0903 23:39:53.359578  161984 kubeadm.go:310] 		- The kubelet is not running
	I0903 23:39:53.359718  161984 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0903 23:39:53.359727  161984 kubeadm.go:310] 
	I0903 23:39:53.359871  161984 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0903 23:39:53.359916  161984 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0903 23:39:53.359961  161984 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0903 23:39:53.359968  161984 kubeadm.go:310] 
	I0903 23:39:53.360175  161984 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0903 23:39:53.360307  161984 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0903 23:39:53.360316  161984 kubeadm.go:310] 
	I0903 23:39:53.360461  161984 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0903 23:39:53.360565  161984 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0903 23:39:53.360667  161984 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0903 23:39:53.360764  161984 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0903 23:39:53.360841  161984 kubeadm.go:394] duration metric: took 3m57.809707974s to StartCluster
	I0903 23:39:53.360890  161984 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:39:53.360954  161984 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:39:53.361022  161984 kubeadm.go:310] 
	I0903 23:39:53.423382  161984 cri.go:89] found id: ""
	I0903 23:39:53.423411  161984 logs.go:282] 0 containers: []
	W0903 23:39:53.423422  161984 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:39:53.423430  161984 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:39:53.423488  161984 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:39:53.479608  161984 cri.go:89] found id: ""
	I0903 23:39:53.479645  161984 logs.go:282] 0 containers: []
	W0903 23:39:53.479659  161984 logs.go:284] No container was found matching "etcd"
	I0903 23:39:53.479667  161984 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:39:53.479736  161984 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:39:53.528071  161984 cri.go:89] found id: ""
	I0903 23:39:53.528107  161984 logs.go:282] 0 containers: []
	W0903 23:39:53.528121  161984 logs.go:284] No container was found matching "coredns"
	I0903 23:39:53.528131  161984 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:39:53.528202  161984 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:39:53.573292  161984 cri.go:89] found id: ""
	I0903 23:39:53.573335  161984 logs.go:282] 0 containers: []
	W0903 23:39:53.573348  161984 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:39:53.573361  161984 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:39:53.573461  161984 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:39:53.620296  161984 cri.go:89] found id: ""
	I0903 23:39:53.620326  161984 logs.go:282] 0 containers: []
	W0903 23:39:53.620334  161984 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:39:53.620340  161984 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:39:53.620395  161984 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:39:53.671465  161984 cri.go:89] found id: ""
	I0903 23:39:53.671500  161984 logs.go:282] 0 containers: []
	W0903 23:39:53.671512  161984 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:39:53.671521  161984 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:39:53.671600  161984 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:39:53.726259  161984 cri.go:89] found id: ""
	I0903 23:39:53.726297  161984 logs.go:282] 0 containers: []
	W0903 23:39:53.726320  161984 logs.go:284] No container was found matching "kindnet"
	I0903 23:39:53.726335  161984 logs.go:123] Gathering logs for kubelet ...
	I0903 23:39:53.726350  161984 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0903 23:39:53.803144  161984 logs.go:123] Gathering logs for dmesg ...
	I0903 23:39:53.803236  161984 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0903 23:39:53.825585  161984 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:39:53.825628  161984 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:39:53.938313  161984 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0903 23:39:53.938350  161984 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:39:53.938368  161984 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0903 23:39:54.079732  161984 logs.go:123] Gathering logs for container status ...
	I0903 23:39:54.079785  161984 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0903 23:39:54.144894  161984 out.go:434] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0903 23:39:54.145083  161984 out.go:285] * 
	W0903 23:39:54.147493  161984 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0903 23:39:54.150778  161984 out.go:203] 
	W0903 23:39:54.151952  161984 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equivalent to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equivalent to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equivalent to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equivalent to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equivalent to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (e.g. required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtime's CLI.
	
		Here is one example of how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0903 23:39:54.152049  161984 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0903 23:39:54.152109  161984 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0903 23:39:54.153719  161984 out.go:203] 
	W0903 23:39:51.760171  168184 pod_ready.go:104] pod "etcd-embed-certs-088493" is not "Ready", error: <nil>
	W0903 23:39:53.762362  168184 pod_ready.go:104] pod "etcd-embed-certs-088493" is not "Ready", error: <nil>
	I0903 23:39:54.769147  168184 pod_ready.go:94] pod "etcd-embed-certs-088493" is "Ready"
	I0903 23:39:54.769179  168184 pod_ready.go:86] duration metric: took 5.015343926s for pod "etcd-embed-certs-088493" in "kube-system" namespace to be "Ready" or be gone ...
	I0903 23:39:54.776166  168184 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-088493" in "kube-system" namespace to be "Ready" or be gone ...
	I0903 23:39:54.799217  168184 pod_ready.go:94] pod "kube-apiserver-embed-certs-088493" is "Ready"
	I0903 23:39:54.799245  168184 pod_ready.go:86] duration metric: took 23.053755ms for pod "kube-apiserver-embed-certs-088493" in "kube-system" namespace to be "Ready" or be gone ...
	I0903 23:39:54.810330  168184 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-088493" in "kube-system" namespace to be "Ready" or be gone ...
	I0903 23:39:54.825639  168184 pod_ready.go:94] pod "kube-controller-manager-embed-certs-088493" is "Ready"
	I0903 23:39:54.825672  168184 pod_ready.go:86] duration metric: took 15.305332ms for pod "kube-controller-manager-embed-certs-088493" in "kube-system" namespace to be "Ready" or be gone ...
	I0903 23:39:54.829341  168184 pod_ready.go:83] waiting for pod "kube-proxy-pgtpd" in "kube-system" namespace to be "Ready" or be gone ...
	I0903 23:39:54.961525  168184 pod_ready.go:94] pod "kube-proxy-pgtpd" is "Ready"
	I0903 23:39:54.961566  168184 pod_ready.go:86] duration metric: took 132.190496ms for pod "kube-proxy-pgtpd" in "kube-system" namespace to be "Ready" or be gone ...
	I0903 23:39:55.159939  168184 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-088493" in "kube-system" namespace to be "Ready" or be gone ...
	I0903 23:39:55.567016  168184 pod_ready.go:94] pod "kube-scheduler-embed-certs-088493" is "Ready"
	I0903 23:39:55.567049  168184 pod_ready.go:86] duration metric: took 407.078157ms for pod "kube-scheduler-embed-certs-088493" in "kube-system" namespace to be "Ready" or be gone ...
	I0903 23:39:55.567065  168184 pod_ready.go:40] duration metric: took 5.831655811s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0903 23:39:55.649021  168184 start.go:617] kubectl: 1.33.2, cluster: 1.34.0 (minor skew: 1)
	I0903 23:39:55.650690  168184 out.go:179] * Done! kubectl is now configured to use "embed-certs-088493" cluster and "default" namespace by default
	I0903 23:39:52.893451  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) Calling .GetIP
	I0903 23:39:52.896582  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | domain default-k8s-diff-port-799704 has defined MAC address 52:54:00:a0:5b:2e in network mk-default-k8s-diff-port-799704
	I0903 23:39:52.896963  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:5b:2e", ip: ""} in network mk-default-k8s-diff-port-799704: {Iface:virbr1 ExpiryTime:2025-09-04 00:39:43 +0000 UTC Type:0 Mac:52:54:00:a0:5b:2e Iaid: IPaddr:192.168.39.63 Prefix:24 Hostname:default-k8s-diff-port-799704 Clientid:01:52:54:00:a0:5b:2e}
	I0903 23:39:52.896985  168525 main.go:141] libmachine: (default-k8s-diff-port-799704) DBG | domain default-k8s-diff-port-799704 has defined IP address 192.168.39.63 and MAC address 52:54:00:a0:5b:2e in network mk-default-k8s-diff-port-799704
	I0903 23:39:52.897290  168525 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0903 23:39:52.901553  168525 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0903 23:39:52.915968  168525 kubeadm.go:875] updating cluster {Name:default-k8s-diff-port-799704 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21147/minikube-v1.36.0-1753487480-21147-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:default-k8s-diff-port-799704 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.63 Port:8444 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0903 23:39:52.916109  168525 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0903 23:39:52.916174  168525 ssh_runner.go:195] Run: sudo crictl images --output json
	I0903 23:39:52.950990  168525 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.0". assuming images are not preloaded.
	I0903 23:39:52.951058  168525 ssh_runner.go:195] Run: which lz4
	I0903 23:39:52.955024  168525 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0903 23:39:52.959339  168525 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0903 23:39:52.959365  168525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (409455026 bytes)
	I0903 23:39:54.470294  168525 crio.go:462] duration metric: took 1.515293199s to copy over tarball
	I0903 23:39:54.470383  168525 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	
	
	==> CRI-O <==
	Sep 03 23:39:59 old-k8s-version-335468 crio[824]: time="2025-09-03 23:39:59.949272219Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1756942799949246296,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0f670780-8227-4d18-b43f-c452c224ebb2 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 03 23:39:59 old-k8s-version-335468 crio[824]: time="2025-09-03 23:39:59.949842776Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e0d2cf0b-db6d-408f-b699-730590e6ed63 name=/runtime.v1.RuntimeService/ListContainers
	Sep 03 23:39:59 old-k8s-version-335468 crio[824]: time="2025-09-03 23:39:59.949906858Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e0d2cf0b-db6d-408f-b699-730590e6ed63 name=/runtime.v1.RuntimeService/ListContainers
	Sep 03 23:39:59 old-k8s-version-335468 crio[824]: time="2025-09-03 23:39:59.949940325Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=e0d2cf0b-db6d-408f-b699-730590e6ed63 name=/runtime.v1.RuntimeService/ListContainers
	Sep 03 23:39:59 old-k8s-version-335468 crio[824]: time="2025-09-03 23:39:59.991897612Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7a930c4f-c3a3-4204-82a9-326b239ce79c name=/runtime.v1.RuntimeService/Version
	Sep 03 23:39:59 old-k8s-version-335468 crio[824]: time="2025-09-03 23:39:59.992118399Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7a930c4f-c3a3-4204-82a9-326b239ce79c name=/runtime.v1.RuntimeService/Version
	Sep 03 23:39:59 old-k8s-version-335468 crio[824]: time="2025-09-03 23:39:59.993419662Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=fa3f5815-fa23-43ba-a8b9-8691302daa58 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 03 23:39:59 old-k8s-version-335468 crio[824]: time="2025-09-03 23:39:59.994608664Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1756942799994574106,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=fa3f5815-fa23-43ba-a8b9-8691302daa58 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 03 23:39:59 old-k8s-version-335468 crio[824]: time="2025-09-03 23:39:59.995307693Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=24e0e8c7-5009-48a6-a74f-6cea53f3d80b name=/runtime.v1.RuntimeService/ListContainers
	Sep 03 23:39:59 old-k8s-version-335468 crio[824]: time="2025-09-03 23:39:59.995465159Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=24e0e8c7-5009-48a6-a74f-6cea53f3d80b name=/runtime.v1.RuntimeService/ListContainers
	Sep 03 23:39:59 old-k8s-version-335468 crio[824]: time="2025-09-03 23:39:59.995561036Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=24e0e8c7-5009-48a6-a74f-6cea53f3d80b name=/runtime.v1.RuntimeService/ListContainers
	Sep 03 23:40:00 old-k8s-version-335468 crio[824]: time="2025-09-03 23:40:00.034948967Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ff7db1ca-b47d-43d2-b26d-2b585b42cce5 name=/runtime.v1.RuntimeService/Version
	Sep 03 23:40:00 old-k8s-version-335468 crio[824]: time="2025-09-03 23:40:00.035058320Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ff7db1ca-b47d-43d2-b26d-2b585b42cce5 name=/runtime.v1.RuntimeService/Version
	Sep 03 23:40:00 old-k8s-version-335468 crio[824]: time="2025-09-03 23:40:00.036629599Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6285e347-9b66-4992-a2da-499774eae6f2 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 03 23:40:00 old-k8s-version-335468 crio[824]: time="2025-09-03 23:40:00.037288699Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1756942800037257943,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6285e347-9b66-4992-a2da-499774eae6f2 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 03 23:40:00 old-k8s-version-335468 crio[824]: time="2025-09-03 23:40:00.038248518Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=06fe42df-ec56-4c11-b0e2-336134a2decd name=/runtime.v1.RuntimeService/ListContainers
	Sep 03 23:40:00 old-k8s-version-335468 crio[824]: time="2025-09-03 23:40:00.038327432Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=06fe42df-ec56-4c11-b0e2-336134a2decd name=/runtime.v1.RuntimeService/ListContainers
	Sep 03 23:40:00 old-k8s-version-335468 crio[824]: time="2025-09-03 23:40:00.038357582Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=06fe42df-ec56-4c11-b0e2-336134a2decd name=/runtime.v1.RuntimeService/ListContainers
	Sep 03 23:40:00 old-k8s-version-335468 crio[824]: time="2025-09-03 23:40:00.075970754Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e6b1b414-3b24-46ec-80be-afbd5da15d5d name=/runtime.v1.RuntimeService/Version
	Sep 03 23:40:00 old-k8s-version-335468 crio[824]: time="2025-09-03 23:40:00.076265181Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e6b1b414-3b24-46ec-80be-afbd5da15d5d name=/runtime.v1.RuntimeService/Version
	Sep 03 23:40:00 old-k8s-version-335468 crio[824]: time="2025-09-03 23:40:00.077955276Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a2be9e89-c867-4f0a-86fb-a944af3f762b name=/runtime.v1.ImageService/ImageFsInfo
	Sep 03 23:40:00 old-k8s-version-335468 crio[824]: time="2025-09-03 23:40:00.078360672Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1756942800078339780,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a2be9e89-c867-4f0a-86fb-a944af3f762b name=/runtime.v1.ImageService/ImageFsInfo
	Sep 03 23:40:00 old-k8s-version-335468 crio[824]: time="2025-09-03 23:40:00.079432703Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7cec1d39-3283-4b69-8165-a5a932b53865 name=/runtime.v1.RuntimeService/ListContainers
	Sep 03 23:40:00 old-k8s-version-335468 crio[824]: time="2025-09-03 23:40:00.079629316Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7cec1d39-3283-4b69-8165-a5a932b53865 name=/runtime.v1.RuntimeService/ListContainers
	Sep 03 23:40:00 old-k8s-version-335468 crio[824]: time="2025-09-03 23:40:00.079731961Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=7cec1d39-3283-4b69-8165-a5a932b53865 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Sep 3 23:35] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000010] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.000046] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.017584] (rpcbind)[118]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +1.215007] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000022] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.089265] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.110682] kauditd_printk_skb: 46 callbacks suppressed
	[  +0.144101] kauditd_printk_skb: 18 callbacks suppressed
	[Sep 3 23:36] kauditd_printk_skb: 18 callbacks suppressed
	
	
	==> kernel <==
	 23:40:00 up 4 min,  0 users,  load average: 0.02, 0.11, 0.06
	Linux old-k8s-version-335468 6.6.95 #1 SMP PREEMPT_DYNAMIC Sat Jul 26 03:21:57 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kubelet <==
	Sep 03 23:40:00 old-k8s-version-335468 kubelet[2054]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/pager.(*ListPager).List(0xc000927e60, 0x4f7fe00, 0xc000122010, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
	Sep 03 23:40:00 old-k8s-version-335468 kubelet[2054]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/pager/pager.go:91 +0x179
	Sep 03 23:40:00 old-k8s-version-335468 kubelet[2054]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).ListAndWatch.func1.1(0xc00070e900, 0xc0000d8460, 0xc000c23e60, 0xc000c1de80, 0xc000c1f4ec, 0xc000c1de90, 0xc0006f4b40)
	Sep 03 23:40:00 old-k8s-version-335468 kubelet[2054]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:302 +0x1a5
	Sep 03 23:40:00 old-k8s-version-335468 kubelet[2054]: created by k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).ListAndWatch.func1
	Sep 03 23:40:00 old-k8s-version-335468 kubelet[2054]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:268 +0x295
	Sep 03 23:40:00 old-k8s-version-335468 kubelet[2054]: goroutine 155 [select]:
	Sep 03 23:40:00 old-k8s-version-335468 kubelet[2054]: net.(*Resolver).lookupIPAddr(0x70c5740, 0x4f7fe40, 0xc00070ecc0, 0x48ab5d6, 0x3, 0xc0006a2f90, 0x1f, 0x20fb, 0x0, 0x0, ...)
	Sep 03 23:40:00 old-k8s-version-335468 kubelet[2054]:         /usr/local/go/src/net/lookup.go:299 +0x685
	Sep 03 23:40:00 old-k8s-version-335468 kubelet[2054]: net.(*Resolver).internetAddrList(0x70c5740, 0x4f7fe40, 0xc00070ecc0, 0x48ab5d6, 0x3, 0xc0006a2f90, 0x24, 0x0, 0x0, 0x0, ...)
	Sep 03 23:40:00 old-k8s-version-335468 kubelet[2054]:         /usr/local/go/src/net/ipsock.go:280 +0x4d4
	Sep 03 23:40:00 old-k8s-version-335468 kubelet[2054]: net.(*Resolver).resolveAddrList(0x70c5740, 0x4f7fe40, 0xc00070ecc0, 0x48abf6d, 0x4, 0x48ab5d6, 0x3, 0xc0006a2f90, 0x24, 0x0, ...)
	Sep 03 23:40:00 old-k8s-version-335468 kubelet[2054]:         /usr/local/go/src/net/dial.go:221 +0x47d
	Sep 03 23:40:00 old-k8s-version-335468 kubelet[2054]: net.(*Dialer).DialContext(0xc000205a40, 0x4f7fe00, 0xc000122018, 0x48ab5d6, 0x3, 0xc0006a2f90, 0x24, 0x0, 0x0, 0x0, ...)
	Sep 03 23:40:00 old-k8s-version-335468 kubelet[2054]:         /usr/local/go/src/net/dial.go:403 +0x22b
	Sep 03 23:40:00 old-k8s-version-335468 kubelet[2054]: k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation.(*Dialer).DialContext(0xc000545cc0, 0x4f7fe00, 0xc000122018, 0x48ab5d6, 0x3, 0xc0006a2f90, 0x24, 0x60, 0x7f1acffdb1f8, 0x118, ...)
	Sep 03 23:40:00 old-k8s-version-335468 kubelet[2054]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation/connrotation.go:73 +0x7e
	Sep 03 23:40:00 old-k8s-version-335468 kubelet[2054]: net/http.(*Transport).dial(0xc00092b400, 0x4f7fe00, 0xc000122018, 0x48ab5d6, 0x3, 0xc0006a2f90, 0x24, 0x0, 0x0, 0x0, ...)
	Sep 03 23:40:00 old-k8s-version-335468 kubelet[2054]:         /usr/local/go/src/net/http/transport.go:1141 +0x1fd
	Sep 03 23:40:00 old-k8s-version-335468 kubelet[2054]: net/http.(*Transport).dialConn(0xc00092b400, 0x4f7fe00, 0xc000122018, 0x0, 0xc0006f4c00, 0x5, 0xc0006a2f90, 0x24, 0x0, 0xc000c2bd40, ...)
	Sep 03 23:40:00 old-k8s-version-335468 kubelet[2054]:         /usr/local/go/src/net/http/transport.go:1575 +0x1abb
	Sep 03 23:40:00 old-k8s-version-335468 kubelet[2054]: net/http.(*Transport).dialConnFor(0xc00092b400, 0xc000c52840)
	Sep 03 23:40:00 old-k8s-version-335468 kubelet[2054]:         /usr/local/go/src/net/http/transport.go:1421 +0xc6
	Sep 03 23:40:00 old-k8s-version-335468 kubelet[2054]: created by net/http.(*Transport).queueForDial
	Sep 03 23:40:00 old-k8s-version-335468 kubelet[2054]:         /usr/local/go/src/net/http/transport.go:1390 +0x40f
	

                                                
                                                
-- /stdout --
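For reference, the kubelet failure captured in the logs above can be triaged with the same commands the kubeadm output recommends, run against the VM from the host. This is a sketch only, not part of this run's output; the profile name is taken from this job and the binary path matches the invocations used throughout this report:
	- 'out/minikube-linux-amd64 -p old-k8s-version-335468 ssh "sudo systemctl status kubelet"'
	- 'out/minikube-linux-amd64 -p old-k8s-version-335468 ssh "sudo journalctl -xeu kubelet"'
	- 'out/minikube-linux-amd64 -p old-k8s-version-335468 ssh "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause"'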
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-335468 -n old-k8s-version-335468
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-335468 -n old-k8s-version-335468: exit status 6 (320.708353ms)

                                                
                                                
-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0903 23:40:00.806939  169218 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-335468" does not appear in /home/jenkins/minikube-integration/21341-109162/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "old-k8s-version-335468" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (4.73s)
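Note: the "Suggestion" logged above (passing --extra-config=kubelet.cgroup-driver=systemd) maps to a start invocation along these lines. This is a sketch, not verified against this job; the driver, runtime, and Kubernetes version flags are reused from elsewhere in this report:
	- 'out/minikube-linux-amd64 start -p old-k8s-version-335468 --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.20.0 --extra-config=kubelet.cgroup-driver=systemd'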

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (112.41s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-335468 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-335468 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m51.189504603s)

                                                
                                                
-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
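The enable failure above reduces to the apiserver on localhost:8443 refusing connections, so the addon callback's kubectl apply never reaches the cluster. A quick reachability check before re-running the addon command could look like this (a sketch reusing the crictl pattern from the logs above):
	- 'out/minikube-linux-amd64 -p old-k8s-version-335468 ssh "sudo crictl ps -a | grep kube-apiserver"'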
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-335468 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-335468 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context old-k8s-version-335468 describe deploy/metrics-server -n kube-system: exit status 1 (41.448017ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-335468" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-335468 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-335468 -n old-k8s-version-335468
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-335468 -n old-k8s-version-335468: exit status 6 (247.999963ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0903 23:41:52.291241  171706 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-335468" does not appear in /home/jenkins/minikube-integration/21341-109162/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:247: status error: exit status 6 (may be ok)
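The stale-context warning in the status output can usually be cleared as the message itself suggests (a sketch, assuming the same profile):
	- 'out/minikube-linux-amd64 -p old-k8s-version-335468 update-context'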
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-335468 logs -n 25
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                    ARGS                                                                                                                     │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ pause   │ -p no-preload-434043 --alsologtostderr -v=1                                                                                                                                                                                                 │ no-preload-434043            │ jenkins │ v1.36.0 │ 03 Sep 25 23:40 UTC │ 03 Sep 25 23:40 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-335468 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                │ old-k8s-version-335468       │ jenkins │ v1.36.0 │ 03 Sep 25 23:40 UTC │                     │
	│ unpause │ -p no-preload-434043 --alsologtostderr -v=1                                                                                                                                                                                                 │ no-preload-434043            │ jenkins │ v1.36.0 │ 03 Sep 25 23:40 UTC │ 03 Sep 25 23:40 UTC │
	│ delete  │ -p no-preload-434043                                                                                                                                                                                                                        │ no-preload-434043            │ jenkins │ v1.36.0 │ 03 Sep 25 23:40 UTC │ 03 Sep 25 23:40 UTC │
	│ delete  │ -p no-preload-434043                                                                                                                                                                                                                        │ no-preload-434043            │ jenkins │ v1.36.0 │ 03 Sep 25 23:40 UTC │ 03 Sep 25 23:40 UTC │
	│ start   │ -p newest-cni-959437 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.0 │ newest-cni-959437            │ jenkins │ v1.36.0 │ 03 Sep 25 23:40 UTC │ 03 Sep 25 23:40 UTC │
	│ image   │ embed-certs-088493 image list --format=json                                                                                                                                                                                                 │ embed-certs-088493           │ jenkins │ v1.36.0 │ 03 Sep 25 23:40 UTC │ 03 Sep 25 23:40 UTC │
	│ pause   │ -p embed-certs-088493 --alsologtostderr -v=1                                                                                                                                                                                                │ embed-certs-088493           │ jenkins │ v1.36.0 │ 03 Sep 25 23:40 UTC │ 03 Sep 25 23:40 UTC │
	│ unpause │ -p embed-certs-088493 --alsologtostderr -v=1                                                                                                                                                                                                │ embed-certs-088493           │ jenkins │ v1.36.0 │ 03 Sep 25 23:40 UTC │ 03 Sep 25 23:40 UTC │
	│ delete  │ -p embed-certs-088493                                                                                                                                                                                                                       │ embed-certs-088493           │ jenkins │ v1.36.0 │ 03 Sep 25 23:40 UTC │ 03 Sep 25 23:40 UTC │
	│ delete  │ -p embed-certs-088493                                                                                                                                                                                                                       │ embed-certs-088493           │ jenkins │ v1.36.0 │ 03 Sep 25 23:40 UTC │ 03 Sep 25 23:40 UTC │
	│ image   │ default-k8s-diff-port-799704 image list --format=json                                                                                                                                                                                       │ default-k8s-diff-port-799704 │ jenkins │ v1.36.0 │ 03 Sep 25 23:40 UTC │ 03 Sep 25 23:40 UTC │
	│ pause   │ -p default-k8s-diff-port-799704 --alsologtostderr -v=1                                                                                                                                                                                      │ default-k8s-diff-port-799704 │ jenkins │ v1.36.0 │ 03 Sep 25 23:40 UTC │ 03 Sep 25 23:40 UTC │
	│ unpause │ -p default-k8s-diff-port-799704 --alsologtostderr -v=1                                                                                                                                                                                      │ default-k8s-diff-port-799704 │ jenkins │ v1.36.0 │ 03 Sep 25 23:40 UTC │ 03 Sep 25 23:40 UTC │
	│ delete  │ -p default-k8s-diff-port-799704                                                                                                                                                                                                             │ default-k8s-diff-port-799704 │ jenkins │ v1.36.0 │ 03 Sep 25 23:40 UTC │ 03 Sep 25 23:40 UTC │
	│ delete  │ -p default-k8s-diff-port-799704                                                                                                                                                                                                             │ default-k8s-diff-port-799704 │ jenkins │ v1.36.0 │ 03 Sep 25 23:40 UTC │ 03 Sep 25 23:40 UTC │
	│ addons  │ enable metrics-server -p newest-cni-959437 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                     │ newest-cni-959437            │ jenkins │ v1.36.0 │ 03 Sep 25 23:40 UTC │ 03 Sep 25 23:40 UTC │
	│ stop    │ -p newest-cni-959437 --alsologtostderr -v=3                                                                                                                                                                                                 │ newest-cni-959437            │ jenkins │ v1.36.0 │ 03 Sep 25 23:40 UTC │ 03 Sep 25 23:41 UTC │
	│ addons  │ enable dashboard -p newest-cni-959437 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                │ newest-cni-959437            │ jenkins │ v1.36.0 │ 03 Sep 25 23:41 UTC │ 03 Sep 25 23:41 UTC │
	│ start   │ -p newest-cni-959437 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.0 │ newest-cni-959437            │ jenkins │ v1.36.0 │ 03 Sep 25 23:41 UTC │ 03 Sep 25 23:41 UTC │
	│ image   │ newest-cni-959437 image list --format=json                                                                                                                                                                                                  │ newest-cni-959437            │ jenkins │ v1.36.0 │ 03 Sep 25 23:41 UTC │ 03 Sep 25 23:41 UTC │
	│ pause   │ -p newest-cni-959437 --alsologtostderr -v=1                                                                                                                                                                                                 │ newest-cni-959437            │ jenkins │ v1.36.0 │ 03 Sep 25 23:41 UTC │ 03 Sep 25 23:41 UTC │
	│ unpause │ -p newest-cni-959437 --alsologtostderr -v=1                                                                                                                                                                                                 │ newest-cni-959437            │ jenkins │ v1.36.0 │ 03 Sep 25 23:41 UTC │ 03 Sep 25 23:41 UTC │
	│ delete  │ -p newest-cni-959437                                                                                                                                                                                                                        │ newest-cni-959437            │ jenkins │ v1.36.0 │ 03 Sep 25 23:41 UTC │ 03 Sep 25 23:41 UTC │
	│ delete  │ -p newest-cni-959437                                                                                                                                                                                                                        │ newest-cni-959437            │ jenkins │ v1.36.0 │ 03 Sep 25 23:41 UTC │ 03 Sep 25 23:41 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/03 23:41:07
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0903 23:41:07.431475  170958 out.go:360] Setting OutFile to fd 1 ...
	I0903 23:41:07.431701  170958 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0903 23:41:07.431710  170958 out.go:374] Setting ErrFile to fd 2...
	I0903 23:41:07.431714  170958 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0903 23:41:07.431911  170958 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21341-109162/.minikube/bin
	I0903 23:41:07.432469  170958 out.go:368] Setting JSON to false
	I0903 23:41:07.433351  170958 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":8611,"bootTime":1756934256,"procs":192,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0903 23:41:07.433474  170958 start.go:140] virtualization: kvm guest
	I0903 23:41:07.435542  170958 out.go:179] * [newest-cni-959437] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	I0903 23:41:07.436798  170958 out.go:179]   - MINIKUBE_LOCATION=21341
	I0903 23:41:07.436840  170958 notify.go:220] Checking for updates...
	I0903 23:41:07.439083  170958 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0903 23:41:07.440232  170958 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21341-109162/kubeconfig
	I0903 23:41:07.441464  170958 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21341-109162/.minikube
	I0903 23:41:07.442685  170958 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0903 23:41:07.443652  170958 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0903 23:41:07.445288  170958 config.go:182] Loaded profile config "newest-cni-959437": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0903 23:41:07.445720  170958 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21341-109162/.minikube/bin/docker-machine-driver-kvm2
	I0903 23:41:07.445798  170958 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0903 23:41:07.461362  170958 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41257
	I0903 23:41:07.461820  170958 main.go:141] libmachine: () Calling .GetVersion
	I0903 23:41:07.462625  170958 main.go:141] libmachine: Using API Version  1
	I0903 23:41:07.462683  170958 main.go:141] libmachine: () Calling .SetConfigRaw
	I0903 23:41:07.463062  170958 main.go:141] libmachine: () Calling .GetMachineName
	I0903 23:41:07.463353  170958 main.go:141] libmachine: (newest-cni-959437) Calling .DriverName
	I0903 23:41:07.463634  170958 driver.go:421] Setting default libvirt URI to qemu:///system
	I0903 23:41:07.463948  170958 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21341-109162/.minikube/bin/docker-machine-driver-kvm2
	I0903 23:41:07.463992  170958 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0903 23:41:07.479035  170958 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40433
	I0903 23:41:07.479534  170958 main.go:141] libmachine: () Calling .GetVersion
	I0903 23:41:07.480013  170958 main.go:141] libmachine: Using API Version  1
	I0903 23:41:07.480042  170958 main.go:141] libmachine: () Calling .SetConfigRaw
	I0903 23:41:07.480380  170958 main.go:141] libmachine: () Calling .GetMachineName
	I0903 23:41:07.480601  170958 main.go:141] libmachine: (newest-cni-959437) Calling .DriverName
	I0903 23:41:07.516643  170958 out.go:179] * Using the kvm2 driver based on existing profile
	I0903 23:41:07.517701  170958 start.go:304] selected driver: kvm2
	I0903 23:41:07.517722  170958 start.go:918] validating driver "kvm2" against &{Name:newest-cni-959437 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21147/minikube-v1.36.0-1753487480-21147-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:newest-cni-959437 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.245 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0903 23:41:07.517869  170958 start.go:929] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0903 23:41:07.518860  170958 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0903 23:41:07.518976  170958 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21341-109162/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0903 23:41:07.534709  170958 install.go:137] /home/jenkins/minikube-integration/21341-109162/.minikube/bin/docker-machine-driver-kvm2 version is 1.36.0
	I0903 23:41:07.535255  170958 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0903 23:41:07.535304  170958 cni.go:84] Creating CNI manager for ""
	I0903 23:41:07.535353  170958 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0903 23:41:07.535433  170958 start.go:348] cluster config:
	{Name:newest-cni-959437 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21147/minikube-v1.36.0-1753487480-21147-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:newest-cni-959437 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.245 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0903 23:41:07.535569  170958 iso.go:125] acquiring lock: {Name:mk1032fbced3c9e76ba5a04480289e9f07d0eb9c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0903 23:41:07.538544  170958 out.go:179] * Starting "newest-cni-959437" primary control-plane node in "newest-cni-959437" cluster
	I0903 23:41:07.539974  170958 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0903 23:41:07.540020  170958 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21341-109162/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4
	I0903 23:41:07.540030  170958 cache.go:58] Caching tarball of preloaded images
	I0903 23:41:07.540135  170958 preload.go:172] Found /home/jenkins/minikube-integration/21341-109162/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0903 23:41:07.540148  170958 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
	I0903 23:41:07.540298  170958 profile.go:143] Saving config to /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/newest-cni-959437/config.json ...
	I0903 23:41:07.540555  170958 start.go:360] acquireMachinesLock for newest-cni-959437: {Name:mkcbe368d68a51a2a3c0eadc653c4df7d9736b4b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0903 23:41:07.540619  170958 start.go:364] duration metric: took 36.045µs to acquireMachinesLock for "newest-cni-959437"
	I0903 23:41:07.540643  170958 start.go:96] Skipping create...Using existing machine configuration
	I0903 23:41:07.540654  170958 fix.go:54] fixHost starting: 
	I0903 23:41:07.541047  170958 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21341-109162/.minikube/bin/docker-machine-driver-kvm2
	I0903 23:41:07.541096  170958 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0903 23:41:07.556176  170958 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35673
	I0903 23:41:07.556637  170958 main.go:141] libmachine: () Calling .GetVersion
	I0903 23:41:07.557118  170958 main.go:141] libmachine: Using API Version  1
	I0903 23:41:07.557152  170958 main.go:141] libmachine: () Calling .SetConfigRaw
	I0903 23:41:07.557558  170958 main.go:141] libmachine: () Calling .GetMachineName
	I0903 23:41:07.557764  170958 main.go:141] libmachine: (newest-cni-959437) Calling .DriverName
	I0903 23:41:07.557906  170958 main.go:141] libmachine: (newest-cni-959437) Calling .GetState
	I0903 23:41:07.559613  170958 fix.go:112] recreateIfNeeded on newest-cni-959437: state=Stopped err=<nil>
	I0903 23:41:07.559660  170958 main.go:141] libmachine: (newest-cni-959437) Calling .DriverName
	W0903 23:41:07.559790  170958 fix.go:138] unexpected machine state, will restart: <nil>
	I0903 23:41:07.561559  170958 out.go:252] * Restarting existing kvm2 VM for "newest-cni-959437" ...
	I0903 23:41:07.561586  170958 main.go:141] libmachine: (newest-cni-959437) Calling .Start
	I0903 23:41:07.561714  170958 main.go:141] libmachine: (newest-cni-959437) starting domain...
	I0903 23:41:07.561747  170958 main.go:141] libmachine: (newest-cni-959437) ensuring networks are active...
	I0903 23:41:07.562640  170958 main.go:141] libmachine: (newest-cni-959437) Ensuring network default is active
	I0903 23:41:07.562979  170958 main.go:141] libmachine: (newest-cni-959437) Ensuring network mk-newest-cni-959437 is active
	I0903 23:41:07.563280  170958 main.go:141] libmachine: (newest-cni-959437) getting domain XML...
	I0903 23:41:07.564050  170958 main.go:141] libmachine: (newest-cni-959437) creating domain...
	I0903 23:41:08.790149  170958 main.go:141] libmachine: (newest-cni-959437) waiting for IP...
	I0903 23:41:08.791234  170958 main.go:141] libmachine: (newest-cni-959437) DBG | domain newest-cni-959437 has defined MAC address 52:54:00:7b:37:ed in network mk-newest-cni-959437
	I0903 23:41:08.791728  170958 main.go:141] libmachine: (newest-cni-959437) DBG | unable to find current IP address of domain newest-cni-959437 in network mk-newest-cni-959437
	I0903 23:41:08.791826  170958 main.go:141] libmachine: (newest-cni-959437) DBG | I0903 23:41:08.791728  170994 retry.go:31] will retry after 224.118989ms: waiting for domain to come up
	I0903 23:41:09.017237  170958 main.go:141] libmachine: (newest-cni-959437) DBG | domain newest-cni-959437 has defined MAC address 52:54:00:7b:37:ed in network mk-newest-cni-959437
	I0903 23:41:09.017964  170958 main.go:141] libmachine: (newest-cni-959437) DBG | unable to find current IP address of domain newest-cni-959437 in network mk-newest-cni-959437
	I0903 23:41:09.017993  170958 main.go:141] libmachine: (newest-cni-959437) DBG | I0903 23:41:09.017927  170994 retry.go:31] will retry after 248.55561ms: waiting for domain to come up
	I0903 23:41:09.268637  170958 main.go:141] libmachine: (newest-cni-959437) DBG | domain newest-cni-959437 has defined MAC address 52:54:00:7b:37:ed in network mk-newest-cni-959437
	I0903 23:41:09.269201  170958 main.go:141] libmachine: (newest-cni-959437) DBG | unable to find current IP address of domain newest-cni-959437 in network mk-newest-cni-959437
	I0903 23:41:09.269233  170958 main.go:141] libmachine: (newest-cni-959437) DBG | I0903 23:41:09.269171  170994 retry.go:31] will retry after 471.130742ms: waiting for domain to come up
	I0903 23:41:09.741786  170958 main.go:141] libmachine: (newest-cni-959437) DBG | domain newest-cni-959437 has defined MAC address 52:54:00:7b:37:ed in network mk-newest-cni-959437
	I0903 23:41:09.742447  170958 main.go:141] libmachine: (newest-cni-959437) DBG | unable to find current IP address of domain newest-cni-959437 in network mk-newest-cni-959437
	I0903 23:41:09.742471  170958 main.go:141] libmachine: (newest-cni-959437) DBG | I0903 23:41:09.742415  170994 retry.go:31] will retry after 536.807842ms: waiting for domain to come up
	I0903 23:41:10.281254  170958 main.go:141] libmachine: (newest-cni-959437) DBG | domain newest-cni-959437 has defined MAC address 52:54:00:7b:37:ed in network mk-newest-cni-959437
	I0903 23:41:10.281832  170958 main.go:141] libmachine: (newest-cni-959437) DBG | unable to find current IP address of domain newest-cni-959437 in network mk-newest-cni-959437
	I0903 23:41:10.281864  170958 main.go:141] libmachine: (newest-cni-959437) DBG | I0903 23:41:10.281790  170994 retry.go:31] will retry after 753.840261ms: waiting for domain to come up
	I0903 23:41:11.036844  170958 main.go:141] libmachine: (newest-cni-959437) DBG | domain newest-cni-959437 has defined MAC address 52:54:00:7b:37:ed in network mk-newest-cni-959437
	I0903 23:41:11.037408  170958 main.go:141] libmachine: (newest-cni-959437) DBG | unable to find current IP address of domain newest-cni-959437 in network mk-newest-cni-959437
	I0903 23:41:11.037441  170958 main.go:141] libmachine: (newest-cni-959437) DBG | I0903 23:41:11.037357  170994 retry.go:31] will retry after 715.626733ms: waiting for domain to come up
	I0903 23:41:11.754388  170958 main.go:141] libmachine: (newest-cni-959437) DBG | domain newest-cni-959437 has defined MAC address 52:54:00:7b:37:ed in network mk-newest-cni-959437
	I0903 23:41:11.754912  170958 main.go:141] libmachine: (newest-cni-959437) DBG | unable to find current IP address of domain newest-cni-959437 in network mk-newest-cni-959437
	I0903 23:41:11.754936  170958 main.go:141] libmachine: (newest-cni-959437) DBG | I0903 23:41:11.754873  170994 retry.go:31] will retry after 1.030054605s: waiting for domain to come up
	I0903 23:41:12.787149  170958 main.go:141] libmachine: (newest-cni-959437) DBG | domain newest-cni-959437 has defined MAC address 52:54:00:7b:37:ed in network mk-newest-cni-959437
	I0903 23:41:12.787727  170958 main.go:141] libmachine: (newest-cni-959437) DBG | unable to find current IP address of domain newest-cni-959437 in network mk-newest-cni-959437
	I0903 23:41:12.787759  170958 main.go:141] libmachine: (newest-cni-959437) DBG | I0903 23:41:12.787697  170994 retry.go:31] will retry after 1.029272159s: waiting for domain to come up
	I0903 23:41:13.818753  170958 main.go:141] libmachine: (newest-cni-959437) DBG | domain newest-cni-959437 has defined MAC address 52:54:00:7b:37:ed in network mk-newest-cni-959437
	I0903 23:41:13.819346  170958 main.go:141] libmachine: (newest-cni-959437) DBG | unable to find current IP address of domain newest-cni-959437 in network mk-newest-cni-959437
	I0903 23:41:13.819370  170958 main.go:141] libmachine: (newest-cni-959437) DBG | I0903 23:41:13.819285  170994 retry.go:31] will retry after 1.357827302s: waiting for domain to come up
	I0903 23:41:15.178345  170958 main.go:141] libmachine: (newest-cni-959437) DBG | domain newest-cni-959437 has defined MAC address 52:54:00:7b:37:ed in network mk-newest-cni-959437
	I0903 23:41:15.178902  170958 main.go:141] libmachine: (newest-cni-959437) DBG | unable to find current IP address of domain newest-cni-959437 in network mk-newest-cni-959437
	I0903 23:41:15.178925  170958 main.go:141] libmachine: (newest-cni-959437) DBG | I0903 23:41:15.178860  170994 retry.go:31] will retry after 1.760166534s: waiting for domain to come up
	I0903 23:41:16.941729  170958 main.go:141] libmachine: (newest-cni-959437) DBG | domain newest-cni-959437 has defined MAC address 52:54:00:7b:37:ed in network mk-newest-cni-959437
	I0903 23:41:16.942299  170958 main.go:141] libmachine: (newest-cni-959437) DBG | unable to find current IP address of domain newest-cni-959437 in network mk-newest-cni-959437
	I0903 23:41:16.942327  170958 main.go:141] libmachine: (newest-cni-959437) DBG | I0903 23:41:16.942240  170994 retry.go:31] will retry after 2.095501135s: waiting for domain to come up
	I0903 23:41:19.039010  170958 main.go:141] libmachine: (newest-cni-959437) DBG | domain newest-cni-959437 has defined MAC address 52:54:00:7b:37:ed in network mk-newest-cni-959437
	I0903 23:41:19.039474  170958 main.go:141] libmachine: (newest-cni-959437) DBG | unable to find current IP address of domain newest-cni-959437 in network mk-newest-cni-959437
	I0903 23:41:19.039502  170958 main.go:141] libmachine: (newest-cni-959437) DBG | I0903 23:41:19.039440  170994 retry.go:31] will retry after 3.376455352s: waiting for domain to come up
	I0903 23:41:22.420019  170958 main.go:141] libmachine: (newest-cni-959437) DBG | domain newest-cni-959437 has defined MAC address 52:54:00:7b:37:ed in network mk-newest-cni-959437
	I0903 23:41:22.420470  170958 main.go:141] libmachine: (newest-cni-959437) DBG | unable to find current IP address of domain newest-cni-959437 in network mk-newest-cni-959437
	I0903 23:41:22.420491  170958 main.go:141] libmachine: (newest-cni-959437) DBG | I0903 23:41:22.420439  170994 retry.go:31] will retry after 3.177590646s: waiting for domain to come up
	I0903 23:41:25.601531  170958 main.go:141] libmachine: (newest-cni-959437) DBG | domain newest-cni-959437 has defined MAC address 52:54:00:7b:37:ed in network mk-newest-cni-959437
	I0903 23:41:25.602059  170958 main.go:141] libmachine: (newest-cni-959437) found domain IP: 192.168.72.245
	I0903 23:41:25.602100  170958 main.go:141] libmachine: (newest-cni-959437) DBG | domain newest-cni-959437 has current primary IP address 192.168.72.245 and MAC address 52:54:00:7b:37:ed in network mk-newest-cni-959437
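The wait-for-IP loop above polls libvirt's DHCP leases with a jittered, roughly doubling delay (224ms, 248ms, 471ms, ... up to several seconds) until the lease appears. A minimal Go sketch of that retry shape, with `lookup` as a hypothetical stand-in for the lease query:

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    // waitForIP polls lookup until an address appears or the deadline
    // passes, sleeping a jittered, roughly doubling interval between
    // attempts -- the same shape as the retry.go:31 delays above.
    func waitForIP(lookup func() (string, error), deadline time.Duration) (string, error) {
    	delay := 200 * time.Millisecond
    	start := time.Now()
    	for time.Since(start) < deadline {
    		if ip, err := lookup(); err == nil {
    			return ip, nil
    		}
    		sleep := delay + time.Duration(rand.Int63n(int64(delay))) // add jitter
    		fmt.Printf("will retry after %v: waiting for domain to come up\n", sleep)
    		time.Sleep(sleep)
    		if delay < 2*time.Second {
    			delay *= 2 // back off until capped
    		}
    	}
    	return "", errors.New("timed out waiting for domain IP")
    }

    func main() {
    	polls := 0
    	ip, err := waitForIP(func() (string, error) {
    		polls++
    		if polls < 4 { // stub: the DHCP lease shows up on the 4th poll
    			return "", errors.New("no lease yet")
    		}
    		return "192.168.72.245", nil
    	}, time.Minute)
    	fmt.Println(ip, err)
    }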
	I0903 23:41:25.602117  170958 main.go:141] libmachine: (newest-cni-959437) reserving static IP address...
	I0903 23:41:25.602576  170958 main.go:141] libmachine: (newest-cni-959437) DBG | found host DHCP lease matching {name: "newest-cni-959437", mac: "52:54:00:7b:37:ed", ip: "192.168.72.245"} in network mk-newest-cni-959437: {Iface:virbr4 ExpiryTime:2025-09-04 00:41:18 +0000 UTC Type:0 Mac:52:54:00:7b:37:ed Iaid: IPaddr:192.168.72.245 Prefix:24 Hostname:newest-cni-959437 Clientid:01:52:54:00:7b:37:ed}
	I0903 23:41:25.602605  170958 main.go:141] libmachine: (newest-cni-959437) DBG | skip adding static IP to network mk-newest-cni-959437 - found existing host DHCP lease matching {name: "newest-cni-959437", mac: "52:54:00:7b:37:ed", ip: "192.168.72.245"}
	I0903 23:41:25.602619  170958 main.go:141] libmachine: (newest-cni-959437) reserved static IP address 192.168.72.245 for domain newest-cni-959437
	I0903 23:41:25.602635  170958 main.go:141] libmachine: (newest-cni-959437) waiting for SSH...
	I0903 23:41:25.602651  170958 main.go:141] libmachine: (newest-cni-959437) DBG | Getting to WaitForSSH function...
	I0903 23:41:25.604895  170958 main.go:141] libmachine: (newest-cni-959437) DBG | domain newest-cni-959437 has defined MAC address 52:54:00:7b:37:ed in network mk-newest-cni-959437
	I0903 23:41:25.605241  170958 main.go:141] libmachine: (newest-cni-959437) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:37:ed", ip: ""} in network mk-newest-cni-959437: {Iface:virbr4 ExpiryTime:2025-09-04 00:41:18 +0000 UTC Type:0 Mac:52:54:00:7b:37:ed Iaid: IPaddr:192.168.72.245 Prefix:24 Hostname:newest-cni-959437 Clientid:01:52:54:00:7b:37:ed}
	I0903 23:41:25.605269  170958 main.go:141] libmachine: (newest-cni-959437) DBG | domain newest-cni-959437 has defined IP address 192.168.72.245 and MAC address 52:54:00:7b:37:ed in network mk-newest-cni-959437
	I0903 23:41:25.605430  170958 main.go:141] libmachine: (newest-cni-959437) DBG | Using SSH client type: external
	I0903 23:41:25.605458  170958 main.go:141] libmachine: (newest-cni-959437) DBG | Using SSH private key: /home/jenkins/minikube-integration/21341-109162/.minikube/machines/newest-cni-959437/id_rsa (-rw-------)
	I0903 23:41:25.605494  170958 main.go:141] libmachine: (newest-cni-959437) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.245 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/21341-109162/.minikube/machines/newest-cni-959437/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0903 23:41:25.605512  170958 main.go:141] libmachine: (newest-cni-959437) DBG | About to run SSH command:
	I0903 23:41:25.605527  170958 main.go:141] libmachine: (newest-cni-959437) DBG | exit 0
	I0903 23:41:25.729406  170958 main.go:141] libmachine: (newest-cni-959437) DBG | SSH cmd err, output: <nil>: 
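The WaitForSSH step confirms reachability by running `exit 0` through an external ssh client with host-key checking disabled (the full argument list is in the DBG lines above). A sketch of the same probe via os/exec, abridged to the key options and using the key path and address from this run:

    package main

    import (
    	"log"
    	"os/exec"
    )

    func main() {
    	// Same probe as WaitForSSH: a successful `exit 0` proves sshd is up
    	// and the key is accepted. Key path and address are from this run;
    	// several -o options shown in the log are omitted for brevity.
    	cmd := exec.Command("/usr/bin/ssh",
    		"-F", "/dev/null",
    		"-o", "ConnectTimeout=10",
    		"-o", "StrictHostKeyChecking=no",
    		"-o", "UserKnownHostsFile=/dev/null",
    		"-o", "IdentitiesOnly=yes",
    		"-i", "/home/jenkins/minikube-integration/21341-109162/.minikube/machines/newest-cni-959437/id_rsa",
    		"-p", "22",
    		"docker@192.168.72.245",
    		"exit 0")
    	if out, err := cmd.CombinedOutput(); err != nil {
    		log.Fatalf("ssh probe failed: %v: %s", err, out)
    	}
    	log.Println("SSH is up")
    }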
	I0903 23:41:25.729772  170958 main.go:141] libmachine: (newest-cni-959437) Calling .GetConfigRaw
	I0903 23:41:25.730373  170958 main.go:141] libmachine: (newest-cni-959437) Calling .GetIP
	I0903 23:41:25.732800  170958 main.go:141] libmachine: (newest-cni-959437) DBG | domain newest-cni-959437 has defined MAC address 52:54:00:7b:37:ed in network mk-newest-cni-959437
	I0903 23:41:25.733142  170958 main.go:141] libmachine: (newest-cni-959437) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:37:ed", ip: ""} in network mk-newest-cni-959437: {Iface:virbr4 ExpiryTime:2025-09-04 00:41:18 +0000 UTC Type:0 Mac:52:54:00:7b:37:ed Iaid: IPaddr:192.168.72.245 Prefix:24 Hostname:newest-cni-959437 Clientid:01:52:54:00:7b:37:ed}
	I0903 23:41:25.733177  170958 main.go:141] libmachine: (newest-cni-959437) DBG | domain newest-cni-959437 has defined IP address 192.168.72.245 and MAC address 52:54:00:7b:37:ed in network mk-newest-cni-959437
	I0903 23:41:25.733415  170958 profile.go:143] Saving config to /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/newest-cni-959437/config.json ...
	I0903 23:41:25.733654  170958 machine.go:93] provisionDockerMachine start ...
	I0903 23:41:25.733676  170958 main.go:141] libmachine: (newest-cni-959437) Calling .DriverName
	I0903 23:41:25.733902  170958 main.go:141] libmachine: (newest-cni-959437) Calling .GetSSHHostname
	I0903 23:41:25.735886  170958 main.go:141] libmachine: (newest-cni-959437) DBG | domain newest-cni-959437 has defined MAC address 52:54:00:7b:37:ed in network mk-newest-cni-959437
	I0903 23:41:25.736146  170958 main.go:141] libmachine: (newest-cni-959437) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:37:ed", ip: ""} in network mk-newest-cni-959437: {Iface:virbr4 ExpiryTime:2025-09-04 00:41:18 +0000 UTC Type:0 Mac:52:54:00:7b:37:ed Iaid: IPaddr:192.168.72.245 Prefix:24 Hostname:newest-cni-959437 Clientid:01:52:54:00:7b:37:ed}
	I0903 23:41:25.736171  170958 main.go:141] libmachine: (newest-cni-959437) DBG | domain newest-cni-959437 has defined IP address 192.168.72.245 and MAC address 52:54:00:7b:37:ed in network mk-newest-cni-959437
	I0903 23:41:25.736295  170958 main.go:141] libmachine: (newest-cni-959437) Calling .GetSSHPort
	I0903 23:41:25.736450  170958 main.go:141] libmachine: (newest-cni-959437) Calling .GetSSHKeyPath
	I0903 23:41:25.736610  170958 main.go:141] libmachine: (newest-cni-959437) Calling .GetSSHKeyPath
	I0903 23:41:25.736735  170958 main.go:141] libmachine: (newest-cni-959437) Calling .GetSSHUsername
	I0903 23:41:25.736896  170958 main.go:141] libmachine: Using SSH client type: native
	I0903 23:41:25.737110  170958 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83a420] 0x83d120 <nil>  [] 0s} 192.168.72.245 22 <nil> <nil>}
	I0903 23:41:25.737121  170958 main.go:141] libmachine: About to run SSH command:
	hostname
	I0903 23:41:25.841707  170958 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0903 23:41:25.841739  170958 main.go:141] libmachine: (newest-cni-959437) Calling .GetMachineName
	I0903 23:41:25.842027  170958 buildroot.go:166] provisioning hostname "newest-cni-959437"
	I0903 23:41:25.842051  170958 main.go:141] libmachine: (newest-cni-959437) Calling .GetMachineName
	I0903 23:41:25.842272  170958 main.go:141] libmachine: (newest-cni-959437) Calling .GetSSHHostname
	I0903 23:41:25.844825  170958 main.go:141] libmachine: (newest-cni-959437) DBG | domain newest-cni-959437 has defined MAC address 52:54:00:7b:37:ed in network mk-newest-cni-959437
	I0903 23:41:25.845158  170958 main.go:141] libmachine: (newest-cni-959437) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:37:ed", ip: ""} in network mk-newest-cni-959437: {Iface:virbr4 ExpiryTime:2025-09-04 00:41:18 +0000 UTC Type:0 Mac:52:54:00:7b:37:ed Iaid: IPaddr:192.168.72.245 Prefix:24 Hostname:newest-cni-959437 Clientid:01:52:54:00:7b:37:ed}
	I0903 23:41:25.845194  170958 main.go:141] libmachine: (newest-cni-959437) DBG | domain newest-cni-959437 has defined IP address 192.168.72.245 and MAC address 52:54:00:7b:37:ed in network mk-newest-cni-959437
	I0903 23:41:25.845307  170958 main.go:141] libmachine: (newest-cni-959437) Calling .GetSSHPort
	I0903 23:41:25.845559  170958 main.go:141] libmachine: (newest-cni-959437) Calling .GetSSHKeyPath
	I0903 23:41:25.845775  170958 main.go:141] libmachine: (newest-cni-959437) Calling .GetSSHKeyPath
	I0903 23:41:25.845958  170958 main.go:141] libmachine: (newest-cni-959437) Calling .GetSSHUsername
	I0903 23:41:25.846140  170958 main.go:141] libmachine: Using SSH client type: native
	I0903 23:41:25.846411  170958 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83a420] 0x83d120 <nil>  [] 0s} 192.168.72.245 22 <nil> <nil>}
	I0903 23:41:25.846429  170958 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-959437 && echo "newest-cni-959437" | sudo tee /etc/hostname
	I0903 23:41:25.965649  170958 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-959437
	
	I0903 23:41:25.965681  170958 main.go:141] libmachine: (newest-cni-959437) Calling .GetSSHHostname
	I0903 23:41:25.968079  170958 main.go:141] libmachine: (newest-cni-959437) DBG | domain newest-cni-959437 has defined MAC address 52:54:00:7b:37:ed in network mk-newest-cni-959437
	I0903 23:41:25.968550  170958 main.go:141] libmachine: (newest-cni-959437) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:37:ed", ip: ""} in network mk-newest-cni-959437: {Iface:virbr4 ExpiryTime:2025-09-04 00:41:18 +0000 UTC Type:0 Mac:52:54:00:7b:37:ed Iaid: IPaddr:192.168.72.245 Prefix:24 Hostname:newest-cni-959437 Clientid:01:52:54:00:7b:37:ed}
	I0903 23:41:25.968585  170958 main.go:141] libmachine: (newest-cni-959437) DBG | domain newest-cni-959437 has defined IP address 192.168.72.245 and MAC address 52:54:00:7b:37:ed in network mk-newest-cni-959437
	I0903 23:41:25.968814  170958 main.go:141] libmachine: (newest-cni-959437) Calling .GetSSHPort
	I0903 23:41:25.969025  170958 main.go:141] libmachine: (newest-cni-959437) Calling .GetSSHKeyPath
	I0903 23:41:25.969185  170958 main.go:141] libmachine: (newest-cni-959437) Calling .GetSSHKeyPath
	I0903 23:41:25.969328  170958 main.go:141] libmachine: (newest-cni-959437) Calling .GetSSHUsername
	I0903 23:41:25.969504  170958 main.go:141] libmachine: Using SSH client type: native
	I0903 23:41:25.969709  170958 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83a420] 0x83d120 <nil>  [] 0s} 192.168.72.245 22 <nil> <nil>}
	I0903 23:41:25.969725  170958 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-959437' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-959437/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-959437' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0903 23:41:26.083283  170958 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0903 23:41:26.083318  170958 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21341-109162/.minikube CaCertPath:/home/jenkins/minikube-integration/21341-109162/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21341-109162/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21341-109162/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21341-109162/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21341-109162/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21341-109162/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21341-109162/.minikube}
	I0903 23:41:26.083343  170958 buildroot.go:174] setting up certificates
	I0903 23:41:26.083357  170958 provision.go:84] configureAuth start
	I0903 23:41:26.083370  170958 main.go:141] libmachine: (newest-cni-959437) Calling .GetMachineName
	I0903 23:41:26.083682  170958 main.go:141] libmachine: (newest-cni-959437) Calling .GetIP
	I0903 23:41:26.086476  170958 main.go:141] libmachine: (newest-cni-959437) DBG | domain newest-cni-959437 has defined MAC address 52:54:00:7b:37:ed in network mk-newest-cni-959437
	I0903 23:41:26.086807  170958 main.go:141] libmachine: (newest-cni-959437) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:37:ed", ip: ""} in network mk-newest-cni-959437: {Iface:virbr4 ExpiryTime:2025-09-04 00:41:18 +0000 UTC Type:0 Mac:52:54:00:7b:37:ed Iaid: IPaddr:192.168.72.245 Prefix:24 Hostname:newest-cni-959437 Clientid:01:52:54:00:7b:37:ed}
	I0903 23:41:26.086837  170958 main.go:141] libmachine: (newest-cni-959437) DBG | domain newest-cni-959437 has defined IP address 192.168.72.245 and MAC address 52:54:00:7b:37:ed in network mk-newest-cni-959437
	I0903 23:41:26.086985  170958 main.go:141] libmachine: (newest-cni-959437) Calling .GetSSHHostname
	I0903 23:41:26.089248  170958 main.go:141] libmachine: (newest-cni-959437) DBG | domain newest-cni-959437 has defined MAC address 52:54:00:7b:37:ed in network mk-newest-cni-959437
	I0903 23:41:26.089620  170958 main.go:141] libmachine: (newest-cni-959437) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:37:ed", ip: ""} in network mk-newest-cni-959437: {Iface:virbr4 ExpiryTime:2025-09-04 00:41:18 +0000 UTC Type:0 Mac:52:54:00:7b:37:ed Iaid: IPaddr:192.168.72.245 Prefix:24 Hostname:newest-cni-959437 Clientid:01:52:54:00:7b:37:ed}
	I0903 23:41:26.089654  170958 main.go:141] libmachine: (newest-cni-959437) DBG | domain newest-cni-959437 has defined IP address 192.168.72.245 and MAC address 52:54:00:7b:37:ed in network mk-newest-cni-959437
	I0903 23:41:26.089797  170958 provision.go:143] copyHostCerts
	I0903 23:41:26.089894  170958 exec_runner.go:144] found /home/jenkins/minikube-integration/21341-109162/.minikube/ca.pem, removing ...
	I0903 23:41:26.089921  170958 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21341-109162/.minikube/ca.pem
	I0903 23:41:26.089986  170958 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21341-109162/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21341-109162/.minikube/ca.pem (1078 bytes)
	I0903 23:41:26.090083  170958 exec_runner.go:144] found /home/jenkins/minikube-integration/21341-109162/.minikube/cert.pem, removing ...
	I0903 23:41:26.090091  170958 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21341-109162/.minikube/cert.pem
	I0903 23:41:26.090117  170958 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21341-109162/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21341-109162/.minikube/cert.pem (1123 bytes)
	I0903 23:41:26.090172  170958 exec_runner.go:144] found /home/jenkins/minikube-integration/21341-109162/.minikube/key.pem, removing ...
	I0903 23:41:26.090179  170958 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21341-109162/.minikube/key.pem
	I0903 23:41:26.090199  170958 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21341-109162/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21341-109162/.minikube/key.pem (1675 bytes)
	I0903 23:41:26.090279  170958 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21341-109162/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21341-109162/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21341-109162/.minikube/certs/ca-key.pem org=jenkins.newest-cni-959437 san=[127.0.0.1 192.168.72.245 localhost minikube newest-cni-959437]
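provision.go:117 issues a server certificate whose SANs cover every name the machine answers to (127.0.0.1, 192.168.72.245, localhost, minikube, newest-cni-959437). A sketch of building such a certificate with crypto/x509; it self-signs for brevity, whereas minikube signs with ca.pem/ca-key.pem:

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		panic(err)
    	}
    	// SANs as in provision.go:117; self-signed here, whereas minikube
    	// signs with its CA key.
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{Organization: []string{"jenkins.newest-cni-959437"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().AddDate(3, 0, 0),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		DNSNames:     []string{"localhost", "minikube", "newest-cni-959437"},
    		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.72.245")},
    	}
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    	if err != nil {
    		panic(err)
    	}
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }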
	I0903 23:41:26.230015  170958 provision.go:177] copyRemoteCerts
	I0903 23:41:26.230076  170958 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0903 23:41:26.230103  170958 main.go:141] libmachine: (newest-cni-959437) Calling .GetSSHHostname
	I0903 23:41:26.232634  170958 main.go:141] libmachine: (newest-cni-959437) DBG | domain newest-cni-959437 has defined MAC address 52:54:00:7b:37:ed in network mk-newest-cni-959437
	I0903 23:41:26.232933  170958 main.go:141] libmachine: (newest-cni-959437) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:37:ed", ip: ""} in network mk-newest-cni-959437: {Iface:virbr4 ExpiryTime:2025-09-04 00:41:18 +0000 UTC Type:0 Mac:52:54:00:7b:37:ed Iaid: IPaddr:192.168.72.245 Prefix:24 Hostname:newest-cni-959437 Clientid:01:52:54:00:7b:37:ed}
	I0903 23:41:26.232955  170958 main.go:141] libmachine: (newest-cni-959437) DBG | domain newest-cni-959437 has defined IP address 192.168.72.245 and MAC address 52:54:00:7b:37:ed in network mk-newest-cni-959437
	I0903 23:41:26.233167  170958 main.go:141] libmachine: (newest-cni-959437) Calling .GetSSHPort
	I0903 23:41:26.233377  170958 main.go:141] libmachine: (newest-cni-959437) Calling .GetSSHKeyPath
	I0903 23:41:26.233558  170958 main.go:141] libmachine: (newest-cni-959437) Calling .GetSSHUsername
	I0903 23:41:26.233704  170958 sshutil.go:53] new ssh client: &{IP:192.168.72.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21341-109162/.minikube/machines/newest-cni-959437/id_rsa Username:docker}
	I0903 23:41:26.317154  170958 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0903 23:41:26.344557  170958 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0903 23:41:26.370707  170958 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0903 23:41:26.396469  170958 provision.go:87] duration metric: took 313.095384ms to configureAuth
	I0903 23:41:26.396512  170958 buildroot.go:189] setting minikube options for container-runtime
	I0903 23:41:26.396717  170958 config.go:182] Loaded profile config "newest-cni-959437": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0903 23:41:26.396793  170958 main.go:141] libmachine: (newest-cni-959437) Calling .GetSSHHostname
	I0903 23:41:26.399716  170958 main.go:141] libmachine: (newest-cni-959437) DBG | domain newest-cni-959437 has defined MAC address 52:54:00:7b:37:ed in network mk-newest-cni-959437
	I0903 23:41:26.400051  170958 main.go:141] libmachine: (newest-cni-959437) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:37:ed", ip: ""} in network mk-newest-cni-959437: {Iface:virbr4 ExpiryTime:2025-09-04 00:41:18 +0000 UTC Type:0 Mac:52:54:00:7b:37:ed Iaid: IPaddr:192.168.72.245 Prefix:24 Hostname:newest-cni-959437 Clientid:01:52:54:00:7b:37:ed}
	I0903 23:41:26.400135  170958 main.go:141] libmachine: (newest-cni-959437) DBG | domain newest-cni-959437 has defined IP address 192.168.72.245 and MAC address 52:54:00:7b:37:ed in network mk-newest-cni-959437
	I0903 23:41:26.400369  170958 main.go:141] libmachine: (newest-cni-959437) Calling .GetSSHPort
	I0903 23:41:26.400586  170958 main.go:141] libmachine: (newest-cni-959437) Calling .GetSSHKeyPath
	I0903 23:41:26.400731  170958 main.go:141] libmachine: (newest-cni-959437) Calling .GetSSHKeyPath
	I0903 23:41:26.400849  170958 main.go:141] libmachine: (newest-cni-959437) Calling .GetSSHUsername
	I0903 23:41:26.401051  170958 main.go:141] libmachine: Using SSH client type: native
	I0903 23:41:26.401268  170958 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83a420] 0x83d120 <nil>  [] 0s} 192.168.72.245 22 <nil> <nil>}
	I0903 23:41:26.401284  170958 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0903 23:41:26.635457  170958 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0903 23:41:26.635487  170958 machine.go:96] duration metric: took 901.817246ms to provisionDockerMachine
	I0903 23:41:26.635498  170958 start.go:293] postStartSetup for "newest-cni-959437" (driver="kvm2")
	I0903 23:41:26.635508  170958 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0903 23:41:26.635540  170958 main.go:141] libmachine: (newest-cni-959437) Calling .DriverName
	I0903 23:41:26.635873  170958 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0903 23:41:26.635907  170958 main.go:141] libmachine: (newest-cni-959437) Calling .GetSSHHostname
	I0903 23:41:26.638575  170958 main.go:141] libmachine: (newest-cni-959437) DBG | domain newest-cni-959437 has defined MAC address 52:54:00:7b:37:ed in network mk-newest-cni-959437
	I0903 23:41:26.638866  170958 main.go:141] libmachine: (newest-cni-959437) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:37:ed", ip: ""} in network mk-newest-cni-959437: {Iface:virbr4 ExpiryTime:2025-09-04 00:41:18 +0000 UTC Type:0 Mac:52:54:00:7b:37:ed Iaid: IPaddr:192.168.72.245 Prefix:24 Hostname:newest-cni-959437 Clientid:01:52:54:00:7b:37:ed}
	I0903 23:41:26.638890  170958 main.go:141] libmachine: (newest-cni-959437) DBG | domain newest-cni-959437 has defined IP address 192.168.72.245 and MAC address 52:54:00:7b:37:ed in network mk-newest-cni-959437
	I0903 23:41:26.639073  170958 main.go:141] libmachine: (newest-cni-959437) Calling .GetSSHPort
	I0903 23:41:26.639264  170958 main.go:141] libmachine: (newest-cni-959437) Calling .GetSSHKeyPath
	I0903 23:41:26.639438  170958 main.go:141] libmachine: (newest-cni-959437) Calling .GetSSHUsername
	I0903 23:41:26.639557  170958 sshutil.go:53] new ssh client: &{IP:192.168.72.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21341-109162/.minikube/machines/newest-cni-959437/id_rsa Username:docker}
	I0903 23:41:26.721564  170958 ssh_runner.go:195] Run: cat /etc/os-release
	I0903 23:41:26.726164  170958 info.go:137] Remote host: Buildroot 2025.02
	I0903 23:41:26.726195  170958 filesync.go:126] Scanning /home/jenkins/minikube-integration/21341-109162/.minikube/addons for local assets ...
	I0903 23:41:26.726298  170958 filesync.go:126] Scanning /home/jenkins/minikube-integration/21341-109162/.minikube/files for local assets ...
	I0903 23:41:26.726398  170958 filesync.go:149] local asset: /home/jenkins/minikube-integration/21341-109162/.minikube/files/etc/ssl/certs/1132882.pem -> 1132882.pem in /etc/ssl/certs
	I0903 23:41:26.726517  170958 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0903 23:41:26.737529  170958 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/files/etc/ssl/certs/1132882.pem --> /etc/ssl/certs/1132882.pem (1708 bytes)
	I0903 23:41:26.764264  170958 start.go:296] duration metric: took 128.749043ms for postStartSetup
	I0903 23:41:26.764310  170958 fix.go:56] duration metric: took 19.22365762s for fixHost
	I0903 23:41:26.764333  170958 main.go:141] libmachine: (newest-cni-959437) Calling .GetSSHHostname
	I0903 23:41:26.767050  170958 main.go:141] libmachine: (newest-cni-959437) DBG | domain newest-cni-959437 has defined MAC address 52:54:00:7b:37:ed in network mk-newest-cni-959437
	I0903 23:41:26.767362  170958 main.go:141] libmachine: (newest-cni-959437) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:37:ed", ip: ""} in network mk-newest-cni-959437: {Iface:virbr4 ExpiryTime:2025-09-04 00:41:18 +0000 UTC Type:0 Mac:52:54:00:7b:37:ed Iaid: IPaddr:192.168.72.245 Prefix:24 Hostname:newest-cni-959437 Clientid:01:52:54:00:7b:37:ed}
	I0903 23:41:26.767387  170958 main.go:141] libmachine: (newest-cni-959437) DBG | domain newest-cni-959437 has defined IP address 192.168.72.245 and MAC address 52:54:00:7b:37:ed in network mk-newest-cni-959437
	I0903 23:41:26.767605  170958 main.go:141] libmachine: (newest-cni-959437) Calling .GetSSHPort
	I0903 23:41:26.767845  170958 main.go:141] libmachine: (newest-cni-959437) Calling .GetSSHKeyPath
	I0903 23:41:26.768027  170958 main.go:141] libmachine: (newest-cni-959437) Calling .GetSSHKeyPath
	I0903 23:41:26.768163  170958 main.go:141] libmachine: (newest-cni-959437) Calling .GetSSHUsername
	I0903 23:41:26.768318  170958 main.go:141] libmachine: Using SSH client type: native
	I0903 23:41:26.768508  170958 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83a420] 0x83d120 <nil>  [] 0s} 192.168.72.245 22 <nil> <nil>}
	I0903 23:41:26.768520  170958 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0903 23:41:26.870376  170958 main.go:141] libmachine: SSH cmd err, output: <nil>: 1756942886.831073268
	
	I0903 23:41:26.870403  170958 fix.go:216] guest clock: 1756942886.831073268
	I0903 23:41:26.870411  170958 fix.go:229] Guest: 2025-09-03 23:41:26.831073268 +0000 UTC Remote: 2025-09-03 23:41:26.764315047 +0000 UTC m=+19.370354123 (delta=66.758221ms)
	I0903 23:41:26.870440  170958 fix.go:200] guest clock delta is within tolerance: 66.758221ms
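fix.go parses the guest's `date +%s.%N`, diffs it against the host-side timestamp, and accepts the machine if the skew is under a tolerance. A small sketch of that check; the 1s tolerance is an assumption for illustration, as the log only shows that a 66.758221ms delta passes:

    package main

    import (
    	"fmt"
    	"time"
    )

    // withinTolerance reports whether guest/host clock skew is acceptable.
    func withinTolerance(guest, remote time.Time, tolerance time.Duration) bool {
    	delta := guest.Sub(remote)
    	if delta < 0 {
    		delta = -delta
    	}
    	return delta <= tolerance
    }

    func main() {
    	guest := time.Unix(1756942886, 831073268)  // parsed from `date +%s.%N`
    	remote := time.Unix(1756942886, 764315047) // host-side timestamp
    	fmt.Println(withinTolerance(guest, remote, time.Second)) // true; delta ~66.758ms
    }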
	I0903 23:41:26.870447  170958 start.go:83] releasing machines lock for "newest-cni-959437", held for 19.329814847s
	I0903 23:41:26.870469  170958 main.go:141] libmachine: (newest-cni-959437) Calling .DriverName
	I0903 23:41:26.870747  170958 main.go:141] libmachine: (newest-cni-959437) Calling .GetIP
	I0903 23:41:26.873428  170958 main.go:141] libmachine: (newest-cni-959437) DBG | domain newest-cni-959437 has defined MAC address 52:54:00:7b:37:ed in network mk-newest-cni-959437
	I0903 23:41:26.873791  170958 main.go:141] libmachine: (newest-cni-959437) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:37:ed", ip: ""} in network mk-newest-cni-959437: {Iface:virbr4 ExpiryTime:2025-09-04 00:41:18 +0000 UTC Type:0 Mac:52:54:00:7b:37:ed Iaid: IPaddr:192.168.72.245 Prefix:24 Hostname:newest-cni-959437 Clientid:01:52:54:00:7b:37:ed}
	I0903 23:41:26.873818  170958 main.go:141] libmachine: (newest-cni-959437) DBG | domain newest-cni-959437 has defined IP address 192.168.72.245 and MAC address 52:54:00:7b:37:ed in network mk-newest-cni-959437
	I0903 23:41:26.873999  170958 main.go:141] libmachine: (newest-cni-959437) Calling .DriverName
	I0903 23:41:26.874459  170958 main.go:141] libmachine: (newest-cni-959437) Calling .DriverName
	I0903 23:41:26.874660  170958 main.go:141] libmachine: (newest-cni-959437) Calling .DriverName
	I0903 23:41:26.874748  170958 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0903 23:41:26.874814  170958 main.go:141] libmachine: (newest-cni-959437) Calling .GetSSHHostname
	I0903 23:41:26.874881  170958 ssh_runner.go:195] Run: cat /version.json
	I0903 23:41:26.874908  170958 main.go:141] libmachine: (newest-cni-959437) Calling .GetSSHHostname
	I0903 23:41:26.877291  170958 main.go:141] libmachine: (newest-cni-959437) DBG | domain newest-cni-959437 has defined MAC address 52:54:00:7b:37:ed in network mk-newest-cni-959437
	I0903 23:41:26.877624  170958 main.go:141] libmachine: (newest-cni-959437) DBG | domain newest-cni-959437 has defined MAC address 52:54:00:7b:37:ed in network mk-newest-cni-959437
	I0903 23:41:26.877658  170958 main.go:141] libmachine: (newest-cni-959437) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:37:ed", ip: ""} in network mk-newest-cni-959437: {Iface:virbr4 ExpiryTime:2025-09-04 00:41:18 +0000 UTC Type:0 Mac:52:54:00:7b:37:ed Iaid: IPaddr:192.168.72.245 Prefix:24 Hostname:newest-cni-959437 Clientid:01:52:54:00:7b:37:ed}
	I0903 23:41:26.877682  170958 main.go:141] libmachine: (newest-cni-959437) DBG | domain newest-cni-959437 has defined IP address 192.168.72.245 and MAC address 52:54:00:7b:37:ed in network mk-newest-cni-959437
	I0903 23:41:26.877834  170958 main.go:141] libmachine: (newest-cni-959437) Calling .GetSSHPort
	I0903 23:41:26.878022  170958 main.go:141] libmachine: (newest-cni-959437) Calling .GetSSHKeyPath
	I0903 23:41:26.878078  170958 main.go:141] libmachine: (newest-cni-959437) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:37:ed", ip: ""} in network mk-newest-cni-959437: {Iface:virbr4 ExpiryTime:2025-09-04 00:41:18 +0000 UTC Type:0 Mac:52:54:00:7b:37:ed Iaid: IPaddr:192.168.72.245 Prefix:24 Hostname:newest-cni-959437 Clientid:01:52:54:00:7b:37:ed}
	I0903 23:41:26.878109  170958 main.go:141] libmachine: (newest-cni-959437) DBG | domain newest-cni-959437 has defined IP address 192.168.72.245 and MAC address 52:54:00:7b:37:ed in network mk-newest-cni-959437
	I0903 23:41:26.878176  170958 main.go:141] libmachine: (newest-cni-959437) Calling .GetSSHUsername
	I0903 23:41:26.878259  170958 main.go:141] libmachine: (newest-cni-959437) Calling .GetSSHPort
	I0903 23:41:26.878363  170958 sshutil.go:53] new ssh client: &{IP:192.168.72.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21341-109162/.minikube/machines/newest-cni-959437/id_rsa Username:docker}
	I0903 23:41:26.878435  170958 main.go:141] libmachine: (newest-cni-959437) Calling .GetSSHKeyPath
	I0903 23:41:26.878545  170958 main.go:141] libmachine: (newest-cni-959437) Calling .GetSSHUsername
	I0903 23:41:26.878684  170958 sshutil.go:53] new ssh client: &{IP:192.168.72.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21341-109162/.minikube/machines/newest-cni-959437/id_rsa Username:docker}
	I0903 23:41:26.993361  170958 ssh_runner.go:195] Run: systemctl --version
	I0903 23:41:26.999333  170958 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0903 23:41:27.146845  170958 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0903 23:41:27.153543  170958 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0903 23:41:27.153620  170958 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0903 23:41:27.171424  170958 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0903 23:41:27.171453  170958 start.go:495] detecting cgroup driver to use...
	I0903 23:41:27.171540  170958 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0903 23:41:27.189177  170958 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0903 23:41:27.204598  170958 docker.go:218] disabling cri-docker service (if available) ...
	I0903 23:41:27.204664  170958 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0903 23:41:27.219646  170958 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0903 23:41:27.234345  170958 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0903 23:41:27.373988  170958 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0903 23:41:27.507255  170958 docker.go:234] disabling docker service ...
	I0903 23:41:27.507331  170958 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0903 23:41:27.522724  170958 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0903 23:41:27.536910  170958 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0903 23:41:27.741090  170958 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0903 23:41:27.872903  170958 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
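Disabling the competing runtimes above follows a stop/disable/mask sequence applied to both the socket and the service units, continuing past errors from units that are already down. A sketch of that pattern:

    package main

    import (
    	"log"
    	"os/exec"
    )

    // disableUnit applies the stop/disable/mask sequence from the log;
    // failures (e.g. a unit that is already stopped) are logged and
    // skipped, matching the best-effort behavior above.
    func disableUnit(unit string) {
    	for _, verb := range []string{"stop", "disable", "mask"} {
    		if out, err := exec.Command("sudo", "systemctl", verb, unit).CombinedOutput(); err != nil {
    			log.Printf("systemctl %s %s: %v: %s", verb, unit, err, out)
    		}
    	}
    }

    func main() {
    	for _, u := range []string{"docker.socket", "docker.service"} {
    		disableUnit(u)
    	}
    }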
	I0903 23:41:27.887951  170958 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0903 23:41:27.908502  170958 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0903 23:41:27.908572  170958 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0903 23:41:27.919849  170958 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0903 23:41:27.919927  170958 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0903 23:41:27.931704  170958 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0903 23:41:27.943046  170958 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0903 23:41:27.954208  170958 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0903 23:41:27.965539  170958 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0903 23:41:27.976235  170958 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0903 23:41:27.994484  170958 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
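The cri-o configuration above is rewritten with a series of idempotent sed programs (pause image, cgroup_manager, conmon_cgroup, unprivileged-port sysctl). A compact sketch that drives the first few of those edits, with the expressions copied verbatim from the log:

    package main

    import (
    	"log"
    	"os/exec"
    )

    func main() {
    	// The first few sed programs from the log, verbatim; each is
    	// idempotent, so re-running provisioning is safe.
    	edits := []string{
    		`sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf`,
    		`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf`,
    		`sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf`,
    		`sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf`,
    	}
    	for _, e := range edits {
    		if out, err := exec.Command("sh", "-c", e).CombinedOutput(); err != nil {
    			log.Fatalf("%s: %v: %s", e, err, out)
    		}
    	}
    }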
	I0903 23:41:28.005765  170958 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0903 23:41:28.014990  170958 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0903 23:41:28.015068  170958 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0903 23:41:28.032649  170958 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
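The netfilter handling above is probe-then-repair: if the bridge sysctl is missing, load br_netfilter, then force IPv4 forwarding on either way. A sketch of the same sequence, using the commands shown above:

    package main

    import (
    	"log"
    	"os/exec"
    )

    func run(args ...string) error {
    	out, err := exec.Command(args[0], args[1:]...).CombinedOutput()
    	if err != nil {
    		log.Printf("%v: %v: %s", args, err, out)
    	}
    	return err
    }

    func main() {
    	// Probe: does the bridge netfilter sysctl exist yet?
    	if run("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables") != nil {
    		// Repair: load the module that provides it.
    		_ = run("sudo", "modprobe", "br_netfilter")
    	}
    	// Force IPv4 forwarding on either way, as in the log.
    	if err := run("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward"); err != nil {
    		log.Fatal(err)
    	}
    }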
	I0903 23:41:28.042910  170958 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0903 23:41:28.172071  170958 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0903 23:41:28.277055  170958 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0903 23:41:28.277158  170958 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0903 23:41:28.282827  170958 start.go:563] Will wait 60s for crictl version
	I0903 23:41:28.282908  170958 ssh_runner.go:195] Run: which crictl
	I0903 23:41:28.286624  170958 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0903 23:41:28.326953  170958 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0903 23:41:28.327054  170958 ssh_runner.go:195] Run: crio --version
	I0903 23:41:28.353894  170958 ssh_runner.go:195] Run: crio --version
	I0903 23:41:28.382978  170958 out.go:179] * Preparing Kubernetes v1.34.0 on CRI-O 1.29.1 ...
	I0903 23:41:28.384169  170958 main.go:141] libmachine: (newest-cni-959437) Calling .GetIP
	I0903 23:41:28.387010  170958 main.go:141] libmachine: (newest-cni-959437) DBG | domain newest-cni-959437 has defined MAC address 52:54:00:7b:37:ed in network mk-newest-cni-959437
	I0903 23:41:28.387409  170958 main.go:141] libmachine: (newest-cni-959437) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:37:ed", ip: ""} in network mk-newest-cni-959437: {Iface:virbr4 ExpiryTime:2025-09-04 00:41:18 +0000 UTC Type:0 Mac:52:54:00:7b:37:ed Iaid: IPaddr:192.168.72.245 Prefix:24 Hostname:newest-cni-959437 Clientid:01:52:54:00:7b:37:ed}
	I0903 23:41:28.387441  170958 main.go:141] libmachine: (newest-cni-959437) DBG | domain newest-cni-959437 has defined IP address 192.168.72.245 and MAC address 52:54:00:7b:37:ed in network mk-newest-cni-959437
	I0903 23:41:28.387653  170958 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0903 23:41:28.391729  170958 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0903 23:41:28.406492  170958 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0903 23:41:28.407499  170958 kubeadm.go:875] updating cluster {Name:newest-cni-959437 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21147/minikube-v1.36.0-1753487480-21147-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:newest-cni-959437 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.245 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0903 23:41:28.407630  170958 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0903 23:41:28.407698  170958 ssh_runner.go:195] Run: sudo crictl images --output json
	I0903 23:41:28.443254  170958 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.0". assuming images are not preloaded.
	I0903 23:41:28.443330  170958 ssh_runner.go:195] Run: which lz4
	I0903 23:41:28.447452  170958 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0903 23:41:28.451895  170958 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0903 23:41:28.451928  170958 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (409455026 bytes)
	I0903 23:41:29.781529  170958 crio.go:462] duration metric: took 1.33411508s to copy over tarball
	I0903 23:41:29.781623  170958 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0903 23:41:31.421586  170958 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.639930308s)
	I0903 23:41:31.421615  170958 crio.go:469] duration metric: took 1.640054014s to extract the tarball
	I0903 23:41:31.421622  170958 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0903 23:41:31.461434  170958 ssh_runner.go:195] Run: sudo crictl images --output json
	I0903 23:41:31.503163  170958 crio.go:514] all images are preloaded for cri-o runtime.
	I0903 23:41:31.503197  170958 cache_images.go:85] Images are preloaded, skipping loading
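The preload path above is probe, copy, extract, delete: stat /preloaded.tar.lz4, scp the ~409 MB tarball, untar it into /var with security xattrs preserved (so file capabilities survive), then remove it and re-list images to confirm. A sketch of the extraction step alone, with the exact flags from the log:

    package main

    import (
    	"log"
    	"os/exec"
    )

    func main() {
    	// Exact flags from the log: lz4-decompress through tar's -I, keep
    	// security xattrs so file capabilities survive extraction.
    	cmd := exec.Command("sudo", "tar",
    		"--xattrs", "--xattrs-include", "security.capability",
    		"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
    	if out, err := cmd.CombinedOutput(); err != nil {
    		log.Fatalf("extract failed: %v: %s", err, out)
    	}
    }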
	I0903 23:41:31.503207  170958 kubeadm.go:926] updating node { 192.168.72.245 8443 v1.34.0 crio true true} ...
	I0903 23:41:31.503346  170958 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-959437 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.245
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:newest-cni-959437 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0903 23:41:31.503434  170958 ssh_runner.go:195] Run: crio config
	I0903 23:41:31.546594  170958 cni.go:84] Creating CNI manager for ""
	I0903 23:41:31.546624  170958 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0903 23:41:31.546639  170958 kubeadm.go:84] Using pod CIDR: 10.42.0.0/16
	I0903 23:41:31.546673  170958 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.72.245 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-959437 NodeName:newest-cni-959437 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.245"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.245 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0903 23:41:31.546809  170958 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.245
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-959437"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.72.245"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.245"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
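The kubeadm.yaml above is rendered from the kubeadm.go:189 options struct. A reduced sketch of that rendering with text/template; the template and the opts fields here are illustrative stand-ins wired to the values from this run, not minikube's real (much larger) template:

    package main

    import (
    	"os"
    	"text/template"
    )

    // A reduced, illustrative stand-in for minikube's kubeadm config
    // template; only a handful of the fields visible in the log are
    // wired through.
    const kubeadmTmpl = `apiVersion: kubeadm.k8s.io/v1beta4
    kind: ClusterConfiguration
    controlPlaneEndpoint: {{.ControlPlaneAddress}}:{{.APIServerPort}}
    kubernetesVersion: {{.KubernetesVersion}}
    networking:
      dnsDomain: {{.DNSDomain}}
      podSubnet: "{{.PodSubnet}}"
      serviceSubnet: {{.ServiceCIDR}}
    `

    type opts struct {
    	ControlPlaneAddress string
    	APIServerPort       int
    	KubernetesVersion   string
    	DNSDomain           string
    	PodSubnet           string
    	ServiceCIDR         string
    }

    func main() {
    	t := template.Must(template.New("kubeadm").Parse(kubeadmTmpl))
    	// Values taken from the rendered config above.
    	_ = t.Execute(os.Stdout, opts{
    		ControlPlaneAddress: "control-plane.minikube.internal",
    		APIServerPort:       8443,
    		KubernetesVersion:   "v1.34.0",
    		DNSDomain:           "cluster.local",
    		PodSubnet:           "10.42.0.0/16",
    		ServiceCIDR:         "10.96.0.0/12",
    	})
    }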
	
	I0903 23:41:31.546882  170958 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0903 23:41:31.558408  170958 binaries.go:44] Found k8s binaries, skipping transfer
	I0903 23:41:31.558484  170958 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0903 23:41:31.569426  170958 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0903 23:41:31.587621  170958 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0903 23:41:31.605871  170958 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2218 bytes)
	I0903 23:41:31.624012  170958 ssh_runner.go:195] Run: grep 192.168.72.245	control-plane.minikube.internal$ /etc/hosts
	I0903 23:41:31.627736  170958 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.245	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0903 23:41:31.640176  170958 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0903 23:41:31.769040  170958 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0903 23:41:31.788408  170958 certs.go:68] Setting up /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/newest-cni-959437 for IP: 192.168.72.245
	I0903 23:41:31.788431  170958 certs.go:194] generating shared ca certs ...
	I0903 23:41:31.788453  170958 certs.go:226] acquiring lock for ca certs: {Name:mk06ceac58fd506484973632a7bd0b701183c2c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0903 23:41:31.788630  170958 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21341-109162/.minikube/ca.key
	I0903 23:41:31.788687  170958 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21341-109162/.minikube/proxy-client-ca.key
	I0903 23:41:31.788705  170958 certs.go:256] generating profile certs ...
	I0903 23:41:31.788798  170958 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/newest-cni-959437/client.key
	I0903 23:41:31.788875  170958 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/newest-cni-959437/apiserver.key.975b700b
	I0903 23:41:31.788928  170958 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/newest-cni-959437/proxy-client.key
	I0903 23:41:31.789054  170958 certs.go:484] found cert: /home/jenkins/minikube-integration/21341-109162/.minikube/certs/113288.pem (1338 bytes)
	W0903 23:41:31.789096  170958 certs.go:480] ignoring /home/jenkins/minikube-integration/21341-109162/.minikube/certs/113288_empty.pem, impossibly tiny 0 bytes
	I0903 23:41:31.789114  170958 certs.go:484] found cert: /home/jenkins/minikube-integration/21341-109162/.minikube/certs/ca-key.pem (1675 bytes)
	I0903 23:41:31.789148  170958 certs.go:484] found cert: /home/jenkins/minikube-integration/21341-109162/.minikube/certs/ca.pem (1078 bytes)
	I0903 23:41:31.789287  170958 certs.go:484] found cert: /home/jenkins/minikube-integration/21341-109162/.minikube/certs/cert.pem (1123 bytes)
	I0903 23:41:31.789365  170958 certs.go:484] found cert: /home/jenkins/minikube-integration/21341-109162/.minikube/certs/key.pem (1675 bytes)
	I0903 23:41:31.789454  170958 certs.go:484] found cert: /home/jenkins/minikube-integration/21341-109162/.minikube/files/etc/ssl/certs/1132882.pem (1708 bytes)
	I0903 23:41:31.790024  170958 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0903 23:41:31.827923  170958 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0903 23:41:31.857753  170958 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0903 23:41:31.886761  170958 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0903 23:41:31.912941  170958 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/newest-cni-959437/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0903 23:41:31.939015  170958 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/newest-cni-959437/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0903 23:41:31.965149  170958 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/newest-cni-959437/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0903 23:41:31.991057  170958 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/newest-cni-959437/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0903 23:41:32.016919  170958 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0903 23:41:32.041973  170958 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/certs/113288.pem --> /usr/share/ca-certificates/113288.pem (1338 bytes)
	I0903 23:41:32.067192  170958 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/files/etc/ssl/certs/1132882.pem --> /usr/share/ca-certificates/1132882.pem (1708 bytes)
	I0903 23:41:32.094532  170958 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
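
The "scp memory -->" entries stream content held in memory, such as the freshly generated kubeconfig, to the guest rather than copying a file from disk. A sketch of the same effect using the stock ssh client and sudo tee; the key path and user@host here are placeholders:

	package main

	import (
		"bytes"
		"log"
		"os/exec"
	)

	func main() {
		kubeconfig := []byte("apiVersion: v1\nkind: Config\n") // illustrative payload
		cmd := exec.Command("ssh",
			"-i", "/path/to/id_rsa", // placeholder key, cf. the machines/.../id_rsa path below
			"docker@192.168.72.245", // user@host from the log
			"sudo tee /var/lib/minikube/kubeconfig >/dev/null")
		cmd.Stdin = bytes.NewReader(kubeconfig) // the "memory" side of the copy
		if out, err := cmd.CombinedOutput(); err != nil {
			log.Fatalf("copy failed: %v: %s", err, out)
		}
	}
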
	I0903 23:41:32.113769  170958 ssh_runner.go:195] Run: openssl version
	I0903 23:41:32.119991  170958 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0903 23:41:32.131920  170958 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0903 23:41:32.136677  170958 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep  3 22:28 /usr/share/ca-certificates/minikubeCA.pem
	I0903 23:41:32.136750  170958 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0903 23:41:32.143397  170958 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0903 23:41:32.154964  170958 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/113288.pem && ln -fs /usr/share/ca-certificates/113288.pem /etc/ssl/certs/113288.pem"
	I0903 23:41:32.166796  170958 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/113288.pem
	I0903 23:41:32.171513  170958 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep  3 22:36 /usr/share/ca-certificates/113288.pem
	I0903 23:41:32.171579  170958 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/113288.pem
	I0903 23:41:32.178057  170958 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/113288.pem /etc/ssl/certs/51391683.0"
	I0903 23:41:32.190092  170958 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1132882.pem && ln -fs /usr/share/ca-certificates/1132882.pem /etc/ssl/certs/1132882.pem"
	I0903 23:41:32.201896  170958 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1132882.pem
	I0903 23:41:32.206555  170958 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep  3 22:36 /usr/share/ca-certificates/1132882.pem
	I0903 23:41:32.206614  170958 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1132882.pem
	I0903 23:41:32.213442  170958 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1132882.pem /etc/ssl/certs/3ec20f2e.0"
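
The hash-and-symlink sequence above builds OpenSSL's hashed certificate directory: "openssl x509 -hash -noout" prints the subject-name hash (b5213941 for minikubeCA, per the link created at 23:41:32.143397), and a <hash>.0 symlink in /etc/ssl/certs lets TLS clients find the CA by that hash. A local Go sketch of one iteration (requires root to write /etc/ssl/certs):

	package main

	import (
		"fmt"
		"log"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	func main() {
		pemPath := "/usr/share/ca-certificates/minikubeCA.pem" // path from the log
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
		if err != nil {
			log.Fatal(err)
		}
		hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
		link := filepath.Join("/etc/ssl/certs", hash+".0")
		os.Remove(link) // emulate ln -fs: replace any stale link
		if err := os.Symlink(pemPath, link); err != nil {
			log.Fatal(err)
		}
		fmt.Println("linked", link, "->", pemPath)
	}
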
	I0903 23:41:32.225262  170958 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0903 23:41:32.229951  170958 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0903 23:41:32.236662  170958 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0903 23:41:32.243450  170958 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0903 23:41:32.250419  170958 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0903 23:41:32.257078  170958 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0903 23:41:32.263800  170958 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
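
Each "-checkend 86400" run exits 0 only if the certificate is still valid 86,400 seconds (24 hours) from now, which is how this path decides whether the control-plane certs need regenerating. A pure-Go equivalent of that check:

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"log"
		"os"
		"time"
	)

	// expiresWithin reports whether the first certificate in pemPath expires
	// within d, mirroring "openssl x509 -checkend".
	func expiresWithin(pemPath string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(pemPath)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM block in %s", pemPath)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println("expires within 24h:", soon)
	}
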
	I0903 23:41:32.270205  170958 kubeadm.go:392] StartCluster: {Name:newest-cni-959437 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21147/minikube-v1.36.0-1753487480-21147-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:newest-cni-959437 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.245 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0903 23:41:32.270279  170958 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0903 23:41:32.270320  170958 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0903 23:41:32.305864  170958 cri.go:89] found id: ""
	I0903 23:41:32.305929  170958 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0903 23:41:32.317244  170958 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0903 23:41:32.317271  170958 kubeadm.go:589] restartPrimaryControlPlane start ...
	I0903 23:41:32.317327  170958 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0903 23:41:32.328132  170958 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0903 23:41:32.328706  170958 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-959437" does not appear in /home/jenkins/minikube-integration/21341-109162/kubeconfig
	I0903 23:41:32.328858  170958 kubeconfig.go:62] /home/jenkins/minikube-integration/21341-109162/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-959437" cluster setting kubeconfig missing "newest-cni-959437" context setting]
	I0903 23:41:32.329213  170958 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21341-109162/kubeconfig: {Name:mkb8e4377749caffe9aea4452a6b5de9f0dc7427 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
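
kubeconfig.go detects that the profile's cluster and context entries are missing from the host kubeconfig and repairs the file in place. A sketch of that repair using client-go's clientcmd package; the certificate-authority path is a placeholder and the exact fields minikube writes are assumed:

	package main

	import (
		"log"

		"k8s.io/client-go/tools/clientcmd"
		clientcmdapi "k8s.io/client-go/tools/clientcmd/api"
	)

	func main() {
		path := "/home/jenkins/minikube-integration/21341-109162/kubeconfig"
		cfg, err := clientcmd.LoadFromFile(path)
		if err != nil {
			log.Fatal(err)
		}
		name := "newest-cni-959437"
		if _, ok := cfg.Clusters[name]; !ok {
			cfg.Clusters[name] = &clientcmdapi.Cluster{
				Server:               "https://192.168.72.245:8443",
				CertificateAuthority: "/path/to/.minikube/ca.crt", // placeholder
			}
		}
		if _, ok := cfg.Contexts[name]; !ok {
			cfg.Contexts[name] = &clientcmdapi.Context{Cluster: name, AuthInfo: name}
		}
		if err := clientcmd.WriteToFile(*cfg, path); err != nil {
			log.Fatal(err)
		}
	}
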
	I0903 23:41:32.330708  170958 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0903 23:41:32.340740  170958 kubeadm.go:626] The running cluster does not require reconfiguration: 192.168.72.245
	I0903 23:41:32.340778  170958 kubeadm.go:1152] stopping kube-system containers ...
	I0903 23:41:32.340791  170958 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0903 23:41:32.340839  170958 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0903 23:41:32.380156  170958 cri.go:89] found id: ""
	I0903 23:41:32.380232  170958 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0903 23:41:32.397371  170958 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0903 23:41:32.408261  170958 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0903 23:41:32.408283  170958 kubeadm.go:157] found existing configuration files:
	
	I0903 23:41:32.408336  170958 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0903 23:41:32.418443  170958 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0903 23:41:32.418509  170958 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0903 23:41:32.428610  170958 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0903 23:41:32.438833  170958 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0903 23:41:32.438893  170958 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0903 23:41:32.449046  170958 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0903 23:41:32.458774  170958 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0903 23:41:32.458833  170958 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0903 23:41:32.468995  170958 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0903 23:41:32.478928  170958 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0903 23:41:32.478981  170958 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
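
The cleanup loop above greps each /etc/kubernetes/*.conf for the expected control-plane endpoint and deletes any file that lacks it, so kubeadm can regenerate them; grep exits 2 here because the files do not exist at all (exit 1 would mean the file is present but has no match). The same loop as a Go sketch:

	package main

	import (
		"log"
		"os"
		"os/exec"
	)

	func main() {
		endpoint := "https://control-plane.minikube.internal:8443"
		confs := []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		}
		for _, c := range confs {
			// Non-zero exit: pattern missing (1) or file unreadable (2).
			if err := exec.Command("grep", "-q", endpoint, c).Run(); err != nil {
				log.Printf("%s lacks %s, removing", c, endpoint)
				os.Remove(c) // kubeadm regenerates it in the next phase
			}
		}
	}
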
	I0903 23:41:32.489124  170958 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0903 23:41:32.499653  170958 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0903 23:41:32.550803  170958 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0903 23:41:33.647854  170958 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.09700886s)
	I0903 23:41:33.647895  170958 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0903 23:41:33.888788  170958 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0903 23:41:33.948978  170958 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
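
Rather than a full "kubeadm init", the restart path replays individual init phases against the staged config, in the order the Run lines show. A sketch of that sequence (the sudo and PATH wrapping from the log is omitted for brevity):

	package main

	import (
		"log"
		"os"
		"os/exec"
	)

	func main() {
		kubeadm := "/var/lib/minikube/binaries/v1.34.0/kubeadm"
		phases := [][]string{
			{"init", "phase", "certs", "all"},
			{"init", "phase", "kubeconfig", "all"},
			{"init", "phase", "kubelet-start"},
			{"init", "phase", "control-plane", "all"},
			{"init", "phase", "etcd", "local"},
		}
		for _, p := range phases {
			args := append(p, "--config", "/var/tmp/minikube/kubeadm.yaml")
			cmd := exec.Command(kubeadm, args...)
			cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
			if err := cmd.Run(); err != nil {
				log.Fatalf("kubeadm %v: %v", p, err)
			}
		}
	}
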
	I0903 23:41:34.024717  170958 api_server.go:52] waiting for apiserver process to appear ...
	I0903 23:41:34.024811  170958 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:41:34.524888  170958 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:41:35.025425  170958 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:41:35.525306  170958 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:41:36.025239  170958 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:41:36.525864  170958 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:41:36.551486  170958 api_server.go:72] duration metric: took 2.526766369s to wait for apiserver process to appear ...
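
The process wait polls pgrep on a roughly 500 ms cadence until a kube-apiserver process appears, as the half-second spacing of the timestamps above shows. A minimal polling loop; the two-minute deadline is an assumption, not minikube's configured timeout:

	package main

	import (
		"fmt"
		"log"
		"os/exec"
		"time"
	)

	func main() {
		deadline := time.Now().Add(2 * time.Minute) // assumed timeout
		for time.Now().Before(deadline) {
			// pgrep exits 0 only when a matching process exists.
			if exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
				fmt.Println("kube-apiserver process is up")
				return
			}
			time.Sleep(500 * time.Millisecond)
		}
		log.Fatal("timed out waiting for kube-apiserver")
	}
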
	I0903 23:41:36.551515  170958 api_server.go:88] waiting for apiserver healthz status ...
	I0903 23:41:36.551538  170958 api_server.go:253] Checking apiserver healthz at https://192.168.72.245:8443/healthz ...
	I0903 23:41:39.181126  170958 api_server.go:279] https://192.168.72.245:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0903 23:41:39.181159  170958 api_server.go:103] status: https://192.168.72.245:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0903 23:41:39.181177  170958 api_server.go:253] Checking apiserver healthz at https://192.168.72.245:8443/healthz ...
	I0903 23:41:39.203529  170958 api_server.go:279] https://192.168.72.245:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0903 23:41:39.203561  170958 api_server.go:103] status: https://192.168.72.245:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0903 23:41:39.552078  170958 api_server.go:253] Checking apiserver healthz at https://192.168.72.245:8443/healthz ...
	I0903 23:41:39.558796  170958 api_server.go:279] https://192.168.72.245:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0903 23:41:39.558822  170958 api_server.go:103] status: https://192.168.72.245:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0903 23:41:40.052581  170958 api_server.go:253] Checking apiserver healthz at https://192.168.72.245:8443/healthz ...
	I0903 23:41:40.062342  170958 api_server.go:279] https://192.168.72.245:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0903 23:41:40.062371  170958 api_server.go:103] status: https://192.168.72.245:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0903 23:41:40.551871  170958 api_server.go:253] Checking apiserver healthz at https://192.168.72.245:8443/healthz ...
	I0903 23:41:40.559356  170958 api_server.go:279] https://192.168.72.245:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0903 23:41:40.559382  170958 api_server.go:103] status: https://192.168.72.245:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0903 23:41:41.052009  170958 api_server.go:253] Checking apiserver healthz at https://192.168.72.245:8443/healthz ...
	I0903 23:41:41.056727  170958 api_server.go:279] https://192.168.72.245:8443/healthz returned 200:
	ok
	I0903 23:41:41.063861  170958 api_server.go:141] control plane version: v1.34.0
	I0903 23:41:41.063891  170958 api_server.go:131] duration metric: took 4.512369605s to wait for apiserver health ...
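
The healthz wait cycles through three states visible above: 403 while RBAC is not yet bootstrapped (anonymous requests cannot read /healthz), 500 while post-start hooks such as rbac/bootstrap-roles are still failing, and finally 200 with body "ok". A sketch of that polling; skipping TLS verification here is a stand-in for loading the cluster CA:

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"log"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// Sketch only: a real client would trust the cluster CA instead.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		url := "https://192.168.72.245:8443/healthz"
		for i := 0; i < 60; i++ {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Printf("healthz: %s\n", body) // "ok"
					return
				}
				// 403 and 500 both mean: not ready yet, retry.
			}
			time.Sleep(500 * time.Millisecond)
		}
		log.Fatal("apiserver never became healthy")
	}
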
	I0903 23:41:41.063901  170958 cni.go:84] Creating CNI manager for ""
	I0903 23:41:41.063907  170958 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0903 23:41:41.065617  170958 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I0903 23:41:41.066785  170958 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0903 23:41:41.080638  170958 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
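
The 496-byte /etc/cni/net.d/1-k8s.conflist is the bridge CNI chain announced above. Its exact contents are not shown in the log, so the config below is illustrative only, with the subnet chosen to match the pod-network-cidr from the StartCluster line:

	package main

	import (
		"log"
		"os"
	)

	// An illustrative bridge conflist; minikube's real file may differ.
	const conflist = `{
	  "cniVersion": "0.4.0",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isGateway": true,
	      "ipMasq": true,
	      "ipam": { "type": "host-local", "subnet": "10.42.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}`

	func main() {
		if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
			log.Fatal(err)
		}
		if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
			log.Fatal(err)
		}
	}
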
	I0903 23:41:41.099806  170958 system_pods.go:43] waiting for kube-system pods to appear ...
	I0903 23:41:41.105102  170958 system_pods.go:59] 8 kube-system pods found
	I0903 23:41:41.105149  170958 system_pods.go:61] "coredns-66bc5c9577-pdqqg" [c9313065-4a7b-40c1-ba40-407e7ddb98f8] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0903 23:41:41.105158  170958 system_pods.go:61] "etcd-newest-cni-959437" [79d67b76-709f-4013-a77a-744dd8281e7d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0903 23:41:41.105166  170958 system_pods.go:61] "kube-apiserver-newest-cni-959437" [c2bc9862-6e8e-484c-9bac-3adf1dc90b3b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0903 23:41:41.105177  170958 system_pods.go:61] "kube-controller-manager-newest-cni-959437" [42d5d8bf-3440-44d1-89f2-42c5edf652e1] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0903 23:41:41.105183  170958 system_pods.go:61] "kube-proxy-mhlbb" [e1083d3c-ec99-45cd-ab69-34fc23197e1e] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0903 23:41:41.105188  170958 system_pods.go:61] "kube-scheduler-newest-cni-959437" [faf7ae74-6306-4d5d-a67a-2ff07286a7ef] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0903 23:41:41.105194  170958 system_pods.go:61] "metrics-server-746fcd58dc-x5lzt" [a60ee5d1-9505-4d20-87fc-606a4f7b63ae] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0903 23:41:41.105199  170958 system_pods.go:61] "storage-provisioner" [17fe655c-ef6c-40f2-a9cc-dd0f56d316c4] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0903 23:41:41.105204  170958 system_pods.go:74] duration metric: took 5.376326ms to wait for pod list to return data ...
	I0903 23:41:41.105213  170958 node_conditions.go:102] verifying NodePressure condition ...
	I0903 23:41:41.107649  170958 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0903 23:41:41.107674  170958 node_conditions.go:123] node cpu capacity is 2
	I0903 23:41:41.107686  170958 node_conditions.go:105] duration metric: took 2.468995ms to run NodePressure ...
	I0903 23:41:41.107716  170958 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0903 23:41:41.414522  170958 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0903 23:41:41.432061  170958 ops.go:34] apiserver oom_adj: -16
	I0903 23:41:41.432086  170958 kubeadm.go:593] duration metric: took 9.114807988s to restartPrimaryControlPlane
	I0903 23:41:41.432095  170958 kubeadm.go:394] duration metric: took 9.161904484s to StartCluster
	I0903 23:41:41.432119  170958 settings.go:142] acquiring lock: {Name:mkb1ef9c34f4ee762bb1ce9c74e3b8a2e234a4c2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0903 23:41:41.432203  170958 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21341-109162/kubeconfig
	I0903 23:41:41.432781  170958 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21341-109162/kubeconfig: {Name:mkb8e4377749caffe9aea4452a6b5de9f0dc7427 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0903 23:41:41.433022  170958 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.245 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0903 23:41:41.433080  170958 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0903 23:41:41.433178  170958 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-959437"
	I0903 23:41:41.433201  170958 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-959437"
	I0903 23:41:41.433209  170958 addons.go:69] Setting default-storageclass=true in profile "newest-cni-959437"
	W0903 23:41:41.433214  170958 addons.go:247] addon storage-provisioner should already be in state true
	I0903 23:41:41.433225  170958 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-959437"
	I0903 23:41:41.433242  170958 addons.go:69] Setting dashboard=true in profile "newest-cni-959437"
	I0903 23:41:41.433250  170958 addons.go:69] Setting metrics-server=true in profile "newest-cni-959437"
	I0903 23:41:41.433275  170958 addons.go:238] Setting addon dashboard=true in "newest-cni-959437"
	W0903 23:41:41.433295  170958 addons.go:247] addon dashboard should already be in state true
	I0903 23:41:41.433335  170958 host.go:66] Checking if "newest-cni-959437" exists ...
	I0903 23:41:41.433277  170958 addons.go:238] Setting addon metrics-server=true in "newest-cni-959437"
	I0903 23:41:41.433346  170958 config.go:182] Loaded profile config "newest-cni-959437": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	W0903 23:41:41.433378  170958 addons.go:247] addon metrics-server should already be in state true
	I0903 23:41:41.433426  170958 host.go:66] Checking if "newest-cni-959437" exists ...
	I0903 23:41:41.433255  170958 host.go:66] Checking if "newest-cni-959437" exists ...
	I0903 23:41:41.433675  170958 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21341-109162/.minikube/bin/docker-machine-driver-kvm2
	I0903 23:41:41.433712  170958 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0903 23:41:41.433772  170958 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21341-109162/.minikube/bin/docker-machine-driver-kvm2
	I0903 23:41:41.433798  170958 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0903 23:41:41.433808  170958 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21341-109162/.minikube/bin/docker-machine-driver-kvm2
	I0903 23:41:41.433848  170958 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0903 23:41:41.433887  170958 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21341-109162/.minikube/bin/docker-machine-driver-kvm2
	I0903 23:41:41.433912  170958 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0903 23:41:41.434837  170958 out.go:179] * Verifying Kubernetes components...
	I0903 23:41:41.436387  170958 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0903 23:41:41.449825  170958 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39979
	I0903 23:41:41.450081  170958 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45557
	I0903 23:41:41.450454  170958 main.go:141] libmachine: () Calling .GetVersion
	I0903 23:41:41.450520  170958 main.go:141] libmachine: () Calling .GetVersion
	I0903 23:41:41.450988  170958 main.go:141] libmachine: Using API Version  1
	I0903 23:41:41.451016  170958 main.go:141] libmachine: () Calling .SetConfigRaw
	I0903 23:41:41.451109  170958 main.go:141] libmachine: Using API Version  1
	I0903 23:41:41.451131  170958 main.go:141] libmachine: () Calling .SetConfigRaw
	I0903 23:41:41.451377  170958 main.go:141] libmachine: () Calling .GetMachineName
	I0903 23:41:41.451508  170958 main.go:141] libmachine: () Calling .GetMachineName
	I0903 23:41:41.451977  170958 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21341-109162/.minikube/bin/docker-machine-driver-kvm2
	I0903 23:41:41.452002  170958 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0903 23:41:41.452804  170958 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21341-109162/.minikube/bin/docker-machine-driver-kvm2
	I0903 23:41:41.452829  170958 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0903 23:41:41.453490  170958 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44381
	I0903 23:41:41.453559  170958 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33069
	I0903 23:41:41.453881  170958 main.go:141] libmachine: () Calling .GetVersion
	I0903 23:41:41.454048  170958 main.go:141] libmachine: () Calling .GetVersion
	I0903 23:41:41.454363  170958 main.go:141] libmachine: Using API Version  1
	I0903 23:41:41.454384  170958 main.go:141] libmachine: () Calling .SetConfigRaw
	I0903 23:41:41.454508  170958 main.go:141] libmachine: Using API Version  1
	I0903 23:41:41.454524  170958 main.go:141] libmachine: () Calling .SetConfigRaw
	I0903 23:41:41.454778  170958 main.go:141] libmachine: () Calling .GetMachineName
	I0903 23:41:41.454849  170958 main.go:141] libmachine: () Calling .GetMachineName
	I0903 23:41:41.455308  170958 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21341-109162/.minikube/bin/docker-machine-driver-kvm2
	I0903 23:41:41.455356  170958 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0903 23:41:41.455691  170958 main.go:141] libmachine: (newest-cni-959437) Calling .GetState
	I0903 23:41:41.458460  170958 addons.go:238] Setting addon default-storageclass=true in "newest-cni-959437"
	W0903 23:41:41.458484  170958 addons.go:247] addon default-storageclass should already be in state true
	I0903 23:41:41.458513  170958 host.go:66] Checking if "newest-cni-959437" exists ...
	I0903 23:41:41.458868  170958 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21341-109162/.minikube/bin/docker-machine-driver-kvm2
	I0903 23:41:41.458916  170958 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0903 23:41:41.468360  170958 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45947
	I0903 23:41:41.468441  170958 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35649
	I0903 23:41:41.469473  170958 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34279
	I0903 23:41:41.494050  170958 main.go:141] libmachine: () Calling .GetVersion
	I0903 23:41:41.494072  170958 main.go:141] libmachine: () Calling .GetVersion
	I0903 23:41:41.494258  170958 main.go:141] libmachine: () Calling .GetVersion
	I0903 23:41:41.494703  170958 main.go:141] libmachine: Using API Version  1
	I0903 23:41:41.494735  170958 main.go:141] libmachine: () Calling .SetConfigRaw
	I0903 23:41:41.494706  170958 main.go:141] libmachine: Using API Version  1
	I0903 23:41:41.494762  170958 main.go:141] libmachine: () Calling .SetConfigRaw
	I0903 23:41:41.494782  170958 main.go:141] libmachine: Using API Version  1
	I0903 23:41:41.494807  170958 main.go:141] libmachine: () Calling .SetConfigRaw
	I0903 23:41:41.495130  170958 main.go:141] libmachine: () Calling .GetMachineName
	I0903 23:41:41.495161  170958 main.go:141] libmachine: () Calling .GetMachineName
	I0903 23:41:41.495132  170958 main.go:141] libmachine: () Calling .GetMachineName
	I0903 23:41:41.495350  170958 main.go:141] libmachine: (newest-cni-959437) Calling .GetState
	I0903 23:41:41.495356  170958 main.go:141] libmachine: (newest-cni-959437) Calling .GetState
	I0903 23:41:41.495361  170958 main.go:141] libmachine: (newest-cni-959437) Calling .GetState
	I0903 23:41:41.497348  170958 main.go:141] libmachine: (newest-cni-959437) Calling .DriverName
	I0903 23:41:41.497411  170958 main.go:141] libmachine: (newest-cni-959437) Calling .DriverName
	I0903 23:41:41.497775  170958 main.go:141] libmachine: (newest-cni-959437) Calling .DriverName
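
Each "Launching plugin server" / "Plugin server listening at address 127.0.0.1:<port>" pair above is libmachine spawning the docker-machine-driver-kvm2 binary and talking to it over a local RPC connection; .GetVersion, .GetState, .GetSSHHostname and the rest are such calls. A toy net/rpc server on an ephemeral localhost port showing the shape of this (not libmachine's actual wire protocol):

	package main

	import (
		"fmt"
		"log"
		"net"
		"net/rpc"
	)

	type Driver struct{}

	// GetState is a stand-in for the driver RPCs seen in the log.
	func (d *Driver) GetState(_ struct{}, reply *string) error {
		*reply = "Running"
		return nil
	}

	func main() {
		srv := rpc.NewServer()
		if err := srv.Register(&Driver{}); err != nil {
			log.Fatal(err)
		}
		// Port 0 lets the kernel pick, hence the varying ports in the log.
		ln, err := net.Listen("tcp", "127.0.0.1:0")
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println("Plugin server listening at address", ln.Addr())
		for {
			conn, err := ln.Accept()
			if err != nil {
				log.Fatal(err)
			}
			go srv.ServeConn(conn)
		}
	}
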
	I0903 23:41:41.499313  170958 out.go:179]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0903 23:41:41.499318  170958 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0903 23:41:41.499316  170958 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0903 23:41:41.500586  170958 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0903 23:41:41.500603  170958 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0903 23:41:41.500625  170958 main.go:141] libmachine: (newest-cni-959437) Calling .GetSSHHostname
	I0903 23:41:41.500645  170958 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0903 23:41:41.500664  170958 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0903 23:41:41.500684  170958 main.go:141] libmachine: (newest-cni-959437) Calling .GetSSHHostname
	I0903 23:41:41.501581  170958 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I0903 23:41:41.502500  170958 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0903 23:41:41.502518  170958 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0903 23:41:41.502537  170958 main.go:141] libmachine: (newest-cni-959437) Calling .GetSSHHostname
	I0903 23:41:41.504261  170958 main.go:141] libmachine: (newest-cni-959437) DBG | domain newest-cni-959437 has defined MAC address 52:54:00:7b:37:ed in network mk-newest-cni-959437
	I0903 23:41:41.504539  170958 main.go:141] libmachine: (newest-cni-959437) DBG | domain newest-cni-959437 has defined MAC address 52:54:00:7b:37:ed in network mk-newest-cni-959437
	I0903 23:41:41.504953  170958 main.go:141] libmachine: (newest-cni-959437) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:37:ed", ip: ""} in network mk-newest-cni-959437: {Iface:virbr4 ExpiryTime:2025-09-04 00:41:18 +0000 UTC Type:0 Mac:52:54:00:7b:37:ed Iaid: IPaddr:192.168.72.245 Prefix:24 Hostname:newest-cni-959437 Clientid:01:52:54:00:7b:37:ed}
	I0903 23:41:41.504986  170958 main.go:141] libmachine: (newest-cni-959437) DBG | domain newest-cni-959437 has defined IP address 192.168.72.245 and MAC address 52:54:00:7b:37:ed in network mk-newest-cni-959437
	I0903 23:41:41.505019  170958 main.go:141] libmachine: (newest-cni-959437) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:37:ed", ip: ""} in network mk-newest-cni-959437: {Iface:virbr4 ExpiryTime:2025-09-04 00:41:18 +0000 UTC Type:0 Mac:52:54:00:7b:37:ed Iaid: IPaddr:192.168.72.245 Prefix:24 Hostname:newest-cni-959437 Clientid:01:52:54:00:7b:37:ed}
	I0903 23:41:41.505032  170958 main.go:141] libmachine: (newest-cni-959437) DBG | domain newest-cni-959437 has defined IP address 192.168.72.245 and MAC address 52:54:00:7b:37:ed in network mk-newest-cni-959437
	I0903 23:41:41.505116  170958 main.go:141] libmachine: (newest-cni-959437) Calling .GetSSHPort
	I0903 23:41:41.505267  170958 main.go:141] libmachine: (newest-cni-959437) Calling .GetSSHPort
	I0903 23:41:41.505267  170958 main.go:141] libmachine: (newest-cni-959437) Calling .GetSSHKeyPath
	I0903 23:41:41.505445  170958 main.go:141] libmachine: (newest-cni-959437) Calling .GetSSHUsername
	I0903 23:41:41.505465  170958 main.go:141] libmachine: (newest-cni-959437) Calling .GetSSHKeyPath
	I0903 23:41:41.505624  170958 main.go:141] libmachine: (newest-cni-959437) Calling .GetSSHUsername
	I0903 23:41:41.505644  170958 sshutil.go:53] new ssh client: &{IP:192.168.72.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21341-109162/.minikube/machines/newest-cni-959437/id_rsa Username:docker}
	I0903 23:41:41.505774  170958 sshutil.go:53] new ssh client: &{IP:192.168.72.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21341-109162/.minikube/machines/newest-cni-959437/id_rsa Username:docker}
	I0903 23:41:41.506068  170958 main.go:141] libmachine: (newest-cni-959437) DBG | domain newest-cni-959437 has defined MAC address 52:54:00:7b:37:ed in network mk-newest-cni-959437
	I0903 23:41:41.506497  170958 main.go:141] libmachine: (newest-cni-959437) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:37:ed", ip: ""} in network mk-newest-cni-959437: {Iface:virbr4 ExpiryTime:2025-09-04 00:41:18 +0000 UTC Type:0 Mac:52:54:00:7b:37:ed Iaid: IPaddr:192.168.72.245 Prefix:24 Hostname:newest-cni-959437 Clientid:01:52:54:00:7b:37:ed}
	I0903 23:41:41.506527  170958 main.go:141] libmachine: (newest-cni-959437) DBG | domain newest-cni-959437 has defined IP address 192.168.72.245 and MAC address 52:54:00:7b:37:ed in network mk-newest-cni-959437
	I0903 23:41:41.506721  170958 main.go:141] libmachine: (newest-cni-959437) Calling .GetSSHPort
	I0903 23:41:41.506882  170958 main.go:141] libmachine: (newest-cni-959437) Calling .GetSSHKeyPath
	I0903 23:41:41.507048  170958 main.go:141] libmachine: (newest-cni-959437) Calling .GetSSHUsername
	I0903 23:41:41.507166  170958 sshutil.go:53] new ssh client: &{IP:192.168.72.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21341-109162/.minikube/machines/newest-cni-959437/id_rsa Username:docker}
	I0903 23:41:41.512374  170958 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39261
	I0903 23:41:41.512792  170958 main.go:141] libmachine: () Calling .GetVersion
	I0903 23:41:41.513218  170958 main.go:141] libmachine: Using API Version  1
	I0903 23:41:41.513249  170958 main.go:141] libmachine: () Calling .SetConfigRaw
	I0903 23:41:41.513638  170958 main.go:141] libmachine: () Calling .GetMachineName
	I0903 23:41:41.514076  170958 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21341-109162/.minikube/bin/docker-machine-driver-kvm2
	I0903 23:41:41.514120  170958 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0903 23:41:41.529319  170958 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35529
	I0903 23:41:41.529723  170958 main.go:141] libmachine: () Calling .GetVersion
	I0903 23:41:41.530182  170958 main.go:141] libmachine: Using API Version  1
	I0903 23:41:41.530200  170958 main.go:141] libmachine: () Calling .SetConfigRaw
	I0903 23:41:41.530553  170958 main.go:141] libmachine: () Calling .GetMachineName
	I0903 23:41:41.530778  170958 main.go:141] libmachine: (newest-cni-959437) Calling .GetState
	I0903 23:41:41.532363  170958 main.go:141] libmachine: (newest-cni-959437) Calling .DriverName
	I0903 23:41:41.532568  170958 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0903 23:41:41.532582  170958 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0903 23:41:41.532613  170958 main.go:141] libmachine: (newest-cni-959437) Calling .GetSSHHostname
	I0903 23:41:41.534986  170958 main.go:141] libmachine: (newest-cni-959437) DBG | domain newest-cni-959437 has defined MAC address 52:54:00:7b:37:ed in network mk-newest-cni-959437
	I0903 23:41:41.535344  170958 main.go:141] libmachine: (newest-cni-959437) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:37:ed", ip: ""} in network mk-newest-cni-959437: {Iface:virbr4 ExpiryTime:2025-09-04 00:41:18 +0000 UTC Type:0 Mac:52:54:00:7b:37:ed Iaid: IPaddr:192.168.72.245 Prefix:24 Hostname:newest-cni-959437 Clientid:01:52:54:00:7b:37:ed}
	I0903 23:41:41.535380  170958 main.go:141] libmachine: (newest-cni-959437) DBG | domain newest-cni-959437 has defined IP address 192.168.72.245 and MAC address 52:54:00:7b:37:ed in network mk-newest-cni-959437
	I0903 23:41:41.535527  170958 main.go:141] libmachine: (newest-cni-959437) Calling .GetSSHPort
	I0903 23:41:41.535671  170958 main.go:141] libmachine: (newest-cni-959437) Calling .GetSSHKeyPath
	I0903 23:41:41.535793  170958 main.go:141] libmachine: (newest-cni-959437) Calling .GetSSHUsername
	I0903 23:41:41.535916  170958 sshutil.go:53] new ssh client: &{IP:192.168.72.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21341-109162/.minikube/machines/newest-cni-959437/id_rsa Username:docker}
	I0903 23:41:41.683389  170958 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0903 23:41:41.712054  170958 api_server.go:52] waiting for apiserver process to appear ...
	I0903 23:41:41.712127  170958 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:41:41.730121  170958 api_server.go:72] duration metric: took 297.057356ms to wait for apiserver process to appear ...
	I0903 23:41:41.730159  170958 api_server.go:88] waiting for apiserver healthz status ...
	I0903 23:41:41.730184  170958 api_server.go:253] Checking apiserver healthz at https://192.168.72.245:8443/healthz ...
	I0903 23:41:41.737036  170958 api_server.go:279] https://192.168.72.245:8443/healthz returned 200:
	ok
	I0903 23:41:41.738069  170958 api_server.go:141] control plane version: v1.34.0
	I0903 23:41:41.738100  170958 api_server.go:131] duration metric: took 7.93255ms to wait for apiserver health ...
	I0903 23:41:41.738113  170958 system_pods.go:43] waiting for kube-system pods to appear ...
	I0903 23:41:41.741446  170958 system_pods.go:59] 8 kube-system pods found
	I0903 23:41:41.741474  170958 system_pods.go:61] "coredns-66bc5c9577-pdqqg" [c9313065-4a7b-40c1-ba40-407e7ddb98f8] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0903 23:41:41.741481  170958 system_pods.go:61] "etcd-newest-cni-959437" [79d67b76-709f-4013-a77a-744dd8281e7d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0903 23:41:41.741489  170958 system_pods.go:61] "kube-apiserver-newest-cni-959437" [c2bc9862-6e8e-484c-9bac-3adf1dc90b3b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0903 23:41:41.741496  170958 system_pods.go:61] "kube-controller-manager-newest-cni-959437" [42d5d8bf-3440-44d1-89f2-42c5edf652e1] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0903 23:41:41.741500  170958 system_pods.go:61] "kube-proxy-mhlbb" [e1083d3c-ec99-45cd-ab69-34fc23197e1e] Running
	I0903 23:41:41.741505  170958 system_pods.go:61] "kube-scheduler-newest-cni-959437" [faf7ae74-6306-4d5d-a67a-2ff07286a7ef] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0903 23:41:41.741510  170958 system_pods.go:61] "metrics-server-746fcd58dc-x5lzt" [a60ee5d1-9505-4d20-87fc-606a4f7b63ae] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0903 23:41:41.741514  170958 system_pods.go:61] "storage-provisioner" [17fe655c-ef6c-40f2-a9cc-dd0f56d316c4] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0903 23:41:41.741519  170958 system_pods.go:74] duration metric: took 3.39988ms to wait for pod list to return data ...
	I0903 23:41:41.741541  170958 default_sa.go:34] waiting for default service account to be created ...
	I0903 23:41:41.744761  170958 default_sa.go:45] found service account: "default"
	I0903 23:41:41.744777  170958 default_sa.go:55] duration metric: took 3.228958ms for default service account to be created ...
	I0903 23:41:41.744787  170958 kubeadm.go:578] duration metric: took 311.734751ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0903 23:41:41.744800  170958 node_conditions.go:102] verifying NodePressure condition ...
	I0903 23:41:41.747456  170958 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0903 23:41:41.747475  170958 node_conditions.go:123] node cpu capacity is 2
	I0903 23:41:41.747483  170958 node_conditions.go:105] duration metric: took 2.680064ms to run NodePressure ...
	I0903 23:41:41.747493  170958 start.go:241] waiting for startup goroutines ...
	I0903 23:41:41.769077  170958 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0903 23:41:41.772673  170958 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0903 23:41:41.772700  170958 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0903 23:41:41.807234  170958 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0903 23:41:41.807274  170958 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0903 23:41:41.830648  170958 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0903 23:41:41.830675  170958 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0903 23:41:41.878026  170958 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0903 23:41:41.885538  170958 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0903 23:41:41.885570  170958 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0903 23:41:41.896410  170958 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0903 23:41:41.896442  170958 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0903 23:41:41.932716  170958 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0903 23:41:41.932751  170958 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0903 23:41:41.945563  170958 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0903 23:41:41.945595  170958 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0903 23:41:41.975280  170958 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0903 23:41:41.975309  170958 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0903 23:41:41.992897  170958 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0903 23:41:41.992923  170958 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0903 23:41:42.008528  170958 main.go:141] libmachine: Making call to close driver server
	I0903 23:41:42.008559  170958 main.go:141] libmachine: (newest-cni-959437) Calling .Close
	I0903 23:41:42.008900  170958 main.go:141] libmachine: (newest-cni-959437) DBG | Closing plugin on server side
	I0903 23:41:42.008951  170958 main.go:141] libmachine: Successfully made call to close driver server
	I0903 23:41:42.008970  170958 main.go:141] libmachine: Making call to close connection to plugin binary
	I0903 23:41:42.008982  170958 main.go:141] libmachine: Making call to close driver server
	I0903 23:41:42.008990  170958 main.go:141] libmachine: (newest-cni-959437) Calling .Close
	I0903 23:41:42.009215  170958 main.go:141] libmachine: Successfully made call to close driver server
	I0903 23:41:42.009233  170958 main.go:141] libmachine: Making call to close connection to plugin binary
	I0903 23:41:42.009232  170958 main.go:141] libmachine: (newest-cni-959437) DBG | Closing plugin on server side
	I0903 23:41:42.023587  170958 main.go:141] libmachine: Making call to close driver server
	I0903 23:41:42.023608  170958 main.go:141] libmachine: (newest-cni-959437) Calling .Close
	I0903 23:41:42.023897  170958 main.go:141] libmachine: Successfully made call to close driver server
	I0903 23:41:42.023916  170958 main.go:141] libmachine: Making call to close connection to plugin binary
	I0903 23:41:42.024102  170958 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0903 23:41:42.024113  170958 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0903 23:41:42.060389  170958 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0903 23:41:42.060417  170958 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0903 23:41:42.061810  170958 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0903 23:41:42.194579  170958 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0903 23:41:42.194620  170958 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0903 23:41:42.261244  170958 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0903 23:41:43.252922  170958 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.374841969s)
	I0903 23:41:43.252984  170958 main.go:141] libmachine: Making call to close driver server
	I0903 23:41:43.252998  170958 main.go:141] libmachine: (newest-cni-959437) Calling .Close
	I0903 23:41:43.253333  170958 main.go:141] libmachine: Successfully made call to close driver server
	I0903 23:41:43.253355  170958 main.go:141] libmachine: Making call to close connection to plugin binary
	I0903 23:41:43.253365  170958 main.go:141] libmachine: Making call to close driver server
	I0903 23:41:43.253373  170958 main.go:141] libmachine: (newest-cni-959437) Calling .Close
	I0903 23:41:43.253645  170958 main.go:141] libmachine: Successfully made call to close driver server
	I0903 23:41:43.253667  170958 main.go:141] libmachine: Making call to close connection to plugin binary
	I0903 23:41:43.433602  170958 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.371747916s)
	I0903 23:41:43.433675  170958 main.go:141] libmachine: Making call to close driver server
	I0903 23:41:43.433696  170958 main.go:141] libmachine: (newest-cni-959437) Calling .Close
	I0903 23:41:43.434131  170958 main.go:141] libmachine: Successfully made call to close driver server
	I0903 23:41:43.434151  170958 main.go:141] libmachine: Making call to close connection to plugin binary
	I0903 23:41:43.434161  170958 main.go:141] libmachine: Making call to close driver server
	I0903 23:41:43.434170  170958 main.go:141] libmachine: (newest-cni-959437) Calling .Close
	I0903 23:41:43.434131  170958 main.go:141] libmachine: (newest-cni-959437) DBG | Closing plugin on server side
	I0903 23:41:43.434457  170958 main.go:141] libmachine: Successfully made call to close driver server
	I0903 23:41:43.434475  170958 main.go:141] libmachine: Making call to close connection to plugin binary
	I0903 23:41:43.434491  170958 addons.go:479] Verifying addon metrics-server=true in "newest-cni-959437"
	I0903 23:41:43.610091  170958 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.348782378s)
	I0903 23:41:43.610172  170958 main.go:141] libmachine: Making call to close driver server
	I0903 23:41:43.610192  170958 main.go:141] libmachine: (newest-cni-959437) Calling .Close
	I0903 23:41:43.610564  170958 main.go:141] libmachine: (newest-cni-959437) DBG | Closing plugin on server side
	I0903 23:41:43.610613  170958 main.go:141] libmachine: Successfully made call to close driver server
	I0903 23:41:43.610626  170958 main.go:141] libmachine: Making call to close connection to plugin binary
	I0903 23:41:43.610643  170958 main.go:141] libmachine: Making call to close driver server
	I0903 23:41:43.610653  170958 main.go:141] libmachine: (newest-cni-959437) Calling .Close
	I0903 23:41:43.610994  170958 main.go:141] libmachine: (newest-cni-959437) DBG | Closing plugin on server side
	I0903 23:41:43.610994  170958 main.go:141] libmachine: Successfully made call to close driver server
	I0903 23:41:43.611015  170958 main.go:141] libmachine: Making call to close connection to plugin binary
	I0903 23:41:43.612599  170958 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-959437 addons enable metrics-server
	
	I0903 23:41:43.613827  170958 out.go:179] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I0903 23:41:43.614903  170958 addons.go:514] duration metric: took 2.181821943s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server dashboard]
	I0903 23:41:43.614951  170958 start.go:246] waiting for cluster config update ...
	I0903 23:41:43.614967  170958 start.go:255] writing updated cluster config ...
	I0903 23:41:43.615294  170958 ssh_runner.go:195] Run: rm -f paused
	I0903 23:41:43.679262  170958 start.go:617] kubectl: 1.33.2, cluster: 1.34.0 (minor skew: 1)
	I0903 23:41:43.680780  170958 out.go:179] * Done! kubectl is now configured to use "newest-cni-959437" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Sep 03 23:41:52 old-k8s-version-335468 crio[824]: time="2025-09-03 23:41:52.650244181Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1756942912650223459,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f2a01519-39be-453b-9a04-90e5e1a42ac0 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 03 23:41:52 old-k8s-version-335468 crio[824]: time="2025-09-03 23:41:52.650801992Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=54080881-78f0-4e05-9578-8245e7388d81 name=/runtime.v1.RuntimeService/ListContainers
	Sep 03 23:41:52 old-k8s-version-335468 crio[824]: time="2025-09-03 23:41:52.650861761Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=54080881-78f0-4e05-9578-8245e7388d81 name=/runtime.v1.RuntimeService/ListContainers
	Sep 03 23:41:52 old-k8s-version-335468 crio[824]: time="2025-09-03 23:41:52.650893862Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=54080881-78f0-4e05-9578-8245e7388d81 name=/runtime.v1.RuntimeService/ListContainers
	Sep 03 23:41:52 old-k8s-version-335468 crio[824]: time="2025-09-03 23:41:52.683056971Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=43261177-d600-4108-bde8-05d136972b53 name=/runtime.v1.RuntimeService/Version
	Sep 03 23:41:52 old-k8s-version-335468 crio[824]: time="2025-09-03 23:41:52.683140224Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=43261177-d600-4108-bde8-05d136972b53 name=/runtime.v1.RuntimeService/Version
	Sep 03 23:41:52 old-k8s-version-335468 crio[824]: time="2025-09-03 23:41:52.684115410Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=038cfe6b-8f4a-42ba-832f-7d470fd30477 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 03 23:41:52 old-k8s-version-335468 crio[824]: time="2025-09-03 23:41:52.684500177Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1756942912684480764,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=038cfe6b-8f4a-42ba-832f-7d470fd30477 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 03 23:41:52 old-k8s-version-335468 crio[824]: time="2025-09-03 23:41:52.685107360Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=26dac2fd-5c22-40dc-92ab-5b1b4f4a8fec name=/runtime.v1.RuntimeService/ListContainers
	Sep 03 23:41:52 old-k8s-version-335468 crio[824]: time="2025-09-03 23:41:52.685177534Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=26dac2fd-5c22-40dc-92ab-5b1b4f4a8fec name=/runtime.v1.RuntimeService/ListContainers
	Sep 03 23:41:52 old-k8s-version-335468 crio[824]: time="2025-09-03 23:41:52.685216204Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=26dac2fd-5c22-40dc-92ab-5b1b4f4a8fec name=/runtime.v1.RuntimeService/ListContainers
	Sep 03 23:41:52 old-k8s-version-335468 crio[824]: time="2025-09-03 23:41:52.717529745Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=efd76ca9-718e-490a-9e6f-747c8638ecf3 name=/runtime.v1.RuntimeService/Version
	Sep 03 23:41:52 old-k8s-version-335468 crio[824]: time="2025-09-03 23:41:52.717614203Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=efd76ca9-718e-490a-9e6f-747c8638ecf3 name=/runtime.v1.RuntimeService/Version
	Sep 03 23:41:52 old-k8s-version-335468 crio[824]: time="2025-09-03 23:41:52.718989179Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4388d85c-152e-4470-b948-2614f7ec9124 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 03 23:41:52 old-k8s-version-335468 crio[824]: time="2025-09-03 23:41:52.719428529Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1756942912719405253,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4388d85c-152e-4470-b948-2614f7ec9124 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 03 23:41:52 old-k8s-version-335468 crio[824]: time="2025-09-03 23:41:52.720148096Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=778f4c0f-b47a-4265-92a1-6b0a3a1c98ad name=/runtime.v1.RuntimeService/ListContainers
	Sep 03 23:41:52 old-k8s-version-335468 crio[824]: time="2025-09-03 23:41:52.720288571Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=778f4c0f-b47a-4265-92a1-6b0a3a1c98ad name=/runtime.v1.RuntimeService/ListContainers
	Sep 03 23:41:52 old-k8s-version-335468 crio[824]: time="2025-09-03 23:41:52.720323422Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=778f4c0f-b47a-4265-92a1-6b0a3a1c98ad name=/runtime.v1.RuntimeService/ListContainers
	Sep 03 23:41:52 old-k8s-version-335468 crio[824]: time="2025-09-03 23:41:52.755358886Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7ab648a3-ba43-4b44-b193-d96e4bfaae10 name=/runtime.v1.RuntimeService/Version
	Sep 03 23:41:52 old-k8s-version-335468 crio[824]: time="2025-09-03 23:41:52.755663076Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7ab648a3-ba43-4b44-b193-d96e4bfaae10 name=/runtime.v1.RuntimeService/Version
	Sep 03 23:41:52 old-k8s-version-335468 crio[824]: time="2025-09-03 23:41:52.756725867Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9e55e5cc-3786-4b64-b637-47e03100af74 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 03 23:41:52 old-k8s-version-335468 crio[824]: time="2025-09-03 23:41:52.757141646Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1756942912757121157,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9e55e5cc-3786-4b64-b637-47e03100af74 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 03 23:41:52 old-k8s-version-335468 crio[824]: time="2025-09-03 23:41:52.757582259Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=92917f9b-db2b-4b7d-9f5f-4e2b11df7ff5 name=/runtime.v1.RuntimeService/ListContainers
	Sep 03 23:41:52 old-k8s-version-335468 crio[824]: time="2025-09-03 23:41:52.757645158Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=92917f9b-db2b-4b7d-9f5f-4e2b11df7ff5 name=/runtime.v1.RuntimeService/ListContainers
	Sep 03 23:41:52 old-k8s-version-335468 crio[824]: time="2025-09-03 23:41:52.757681169Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=92917f9b-db2b-4b7d-9f5f-4e2b11df7ff5 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Sep 3 23:35] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000010] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.000046] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.017584] (rpcbind)[118]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +1.215007] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000022] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.089265] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.110682] kauditd_printk_skb: 46 callbacks suppressed
	[  +0.144101] kauditd_printk_skb: 18 callbacks suppressed
	[Sep 3 23:36] kauditd_printk_skb: 18 callbacks suppressed
	
	
	==> kernel <==
	 23:41:52 up 6 min,  0 users,  load average: 0.05, 0.08, 0.06
	Linux old-k8s-version-335468 6.6.95 #1 SMP PREEMPT_DYNAMIC Sat Jul 26 03:21:57 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kubelet <==
	Sep 03 23:41:51 old-k8s-version-335468 kubelet[2596]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:108 +0x66
	Sep 03 23:41:51 old-k8s-version-335468 kubelet[2596]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.DefaultWatchErrorHandler(0xc000d7c380, 0x4f04d00, 0xc00044c5d0)
	Sep 03 23:41:51 old-k8s-version-335468 kubelet[2596]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:138 +0x185
	Sep 03 23:41:51 old-k8s-version-335468 kubelet[2596]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run.func1()
	Sep 03 23:41:51 old-k8s-version-335468 kubelet[2596]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:222 +0x70
	Sep 03 23:41:51 old-k8s-version-335468 kubelet[2596]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc000c6b6f0)
	Sep 03 23:41:51 old-k8s-version-335468 kubelet[2596]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:155 +0x5f
	Sep 03 23:41:51 old-k8s-version-335468 kubelet[2596]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000dd5ef0, 0x4f0ac20, 0xc000c220a0, 0x1, 0xc0001000c0)
	Sep 03 23:41:51 old-k8s-version-335468 kubelet[2596]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:156 +0xad
	Sep 03 23:41:51 old-k8s-version-335468 kubelet[2596]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run(0xc000d7c380, 0xc0001000c0)
	Sep 03 23:41:51 old-k8s-version-335468 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Sep 03 23:41:51 old-k8s-version-335468 kubelet[2596]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:220 +0x1c5
	Sep 03 23:41:51 old-k8s-version-335468 kubelet[2596]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).StartWithChannel.func1()
	Sep 03 23:41:51 old-k8s-version-335468 kubelet[2596]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:56 +0x2e
	Sep 03 23:41:51 old-k8s-version-335468 kubelet[2596]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1(0xc000c3e200, 0xc000ca56e0)
	Sep 03 23:41:51 old-k8s-version-335468 kubelet[2596]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:73 +0x51
	Sep 03 23:41:51 old-k8s-version-335468 kubelet[2596]: created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start
	Sep 03 23:41:51 old-k8s-version-335468 kubelet[2596]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x65
	Sep 03 23:41:51 old-k8s-version-335468 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 40.
	Sep 03 23:41:51 old-k8s-version-335468 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Sep 03 23:41:51 old-k8s-version-335468 kubelet[2606]: I0903 23:41:51.907818    2606 server.go:416] Version: v1.20.0
	Sep 03 23:41:51 old-k8s-version-335468 kubelet[2606]: I0903 23:41:51.908323    2606 server.go:837] Client rotation is on, will bootstrap in background
	Sep 03 23:41:51 old-k8s-version-335468 kubelet[2606]: I0903 23:41:51.910314    2606 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Sep 03 23:41:51 old-k8s-version-335468 kubelet[2606]: W0903 23:41:51.911312    2606 manager.go:159] Cannot detect current cgroup on cgroup v2
	Sep 03 23:41:51 old-k8s-version-335468 kubelet[2606]: I0903 23:41:51.911472    2606 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-335468 -n old-k8s-version-335468
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-335468 -n old-k8s-version-335468: exit status 6 (232.268187ms)

                                                
                                                
-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0903 23:41:53.221060  171777 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-335468" does not appear in /home/jenkins/minikube-integration/21341-109162/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "old-k8s-version-335468" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (112.41s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (513.55s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-335468 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
E0903 23:42:01.591814  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/no-preload-434043/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0903 23:42:01.598258  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/no-preload-434043/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0903 23:42:01.609633  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/no-preload-434043/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0903 23:42:01.631049  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/no-preload-434043/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0903 23:42:01.672500  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/no-preload-434043/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0903 23:42:01.754035  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/no-preload-434043/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0903 23:42:01.915610  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/no-preload-434043/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0903 23:42:02.237219  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/no-preload-434043/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0903 23:42:02.878635  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/no-preload-434043/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0903 23:42:04.160575  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/no-preload-434043/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0903 23:42:06.077613  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/flannel-380966/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0903 23:42:06.721933  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/no-preload-434043/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0903 23:42:11.844160  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/no-preload-434043/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0903 23:42:20.174161  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/enable-default-cni-380966/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0903 23:42:22.086250  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/no-preload-434043/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0903 23:42:26.245840  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/addons-389176/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0903 23:42:33.443633  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/custom-flannel-380966/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0903 23:42:42.568295  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/no-preload-434043/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0903 23:42:46.320312  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/auto-380966/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0903 23:42:48.518015  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/default-k8s-diff-port-799704/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0903 23:42:48.524403  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/default-k8s-diff-port-799704/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0903 23:42:48.535822  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/default-k8s-diff-port-799704/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0903 23:42:48.557307  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/default-k8s-diff-port-799704/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0903 23:42:48.598730  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/default-k8s-diff-port-799704/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0903 23:42:48.680264  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/default-k8s-diff-port-799704/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0903 23:42:48.841886  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/default-k8s-diff-port-799704/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0903 23:42:49.163226  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/default-k8s-diff-port-799704/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0903 23:42:49.804822  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/default-k8s-diff-port-799704/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0903 23:42:50.184735  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/bridge-380966/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0903 23:42:51.086823  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/default-k8s-diff-port-799704/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0903 23:42:53.648518  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/default-k8s-diff-port-799704/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0903 23:42:58.770063  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/default-k8s-diff-port-799704/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0903 23:43:09.012366  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/default-k8s-diff-port-799704/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0903 23:43:14.023321  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/auto-380966/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0903 23:43:23.530328  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/no-preload-434043/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0903 23:43:27.999354  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/flannel-380966/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0903 23:43:29.493683  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/default-k8s-diff-port-799704/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0903 23:43:37.764440  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/kindnet-380966/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0903 23:43:42.096194  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/enable-default-cni-380966/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0903 23:43:59.139100  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/functional-381687/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0903 23:44:05.469596  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/kindnet-380966/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0903 23:44:10.455995  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/default-k8s-diff-port-799704/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0903 23:44:12.839595  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/calico-380966/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0903 23:44:40.541602  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/calico-380966/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0903 23:44:45.452046  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/no-preload-434043/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0903 23:44:49.581055  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/custom-flannel-380966/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0903 23:45:06.323396  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/bridge-380966/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0903 23:45:17.285773  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/custom-flannel-380966/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0903 23:45:32.378336  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/default-k8s-diff-port-799704/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0903 23:45:34.026121  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/bridge-380966/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0903 23:45:44.140313  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/flannel-380966/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0903 23:45:58.234199  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/enable-default-cni-380966/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0903 23:46:03.161493  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/addons-389176/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0903 23:46:11.840979  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/flannel-380966/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0903 23:46:25.938211  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/enable-default-cni-380966/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0903 23:47:01.591849  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/no-preload-434043/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0903 23:47:29.293512  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/no-preload-434043/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0903 23:47:46.321869  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/auto-380966/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0903 23:47:48.517798  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/default-k8s-diff-port-799704/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0903 23:48:16.221014  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/default-k8s-diff-port-799704/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0903 23:48:37.763958  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/kindnet-380966/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0903 23:48:59.139093  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/functional-381687/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0903 23:49:12.839307  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/calico-380966/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0903 23:49:49.580678  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/custom-flannel-380966/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0903 23:50:06.323241  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/bridge-380966/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0903 23:50:22.210336  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/functional-381687/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-335468 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (8m31.57773025s)

                                                
                                                
-- stdout --
	* [old-k8s-version-335468] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21341
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21341-109162/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21341-109162/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.34.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.34.0
	* Using the kvm2 driver based on existing profile
	* Starting "old-k8s-version-335468" primary control-plane node in "old-k8s-version-335468" cluster
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0903 23:41:58.777140  171911 out.go:360] Setting OutFile to fd 1 ...
	I0903 23:41:58.777406  171911 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0903 23:41:58.777416  171911 out.go:374] Setting ErrFile to fd 2...
	I0903 23:41:58.777422  171911 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0903 23:41:58.777607  171911 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21341-109162/.minikube/bin
	I0903 23:41:58.778141  171911 out.go:368] Setting JSON to false
	I0903 23:41:58.779000  171911 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":8663,"bootTime":1756934256,"procs":189,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0903 23:41:58.779090  171911 start.go:140] virtualization: kvm guest
	I0903 23:41:58.781253  171911 out.go:179] * [old-k8s-version-335468] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	I0903 23:41:58.782571  171911 notify.go:220] Checking for updates...
	I0903 23:41:58.782584  171911 out.go:179]   - MINIKUBE_LOCATION=21341
	I0903 23:41:58.783694  171911 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0903 23:41:58.784604  171911 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21341-109162/kubeconfig
	I0903 23:41:58.785686  171911 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21341-109162/.minikube
	I0903 23:41:58.786886  171911 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0903 23:41:58.787874  171911 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0903 23:41:58.789111  171911 config.go:182] Loaded profile config "old-k8s-version-335468": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0903 23:41:58.789531  171911 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21341-109162/.minikube/bin/docker-machine-driver-kvm2
	I0903 23:41:58.789581  171911 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0903 23:41:58.804713  171911 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41019
	I0903 23:41:58.805180  171911 main.go:141] libmachine: () Calling .GetVersion
	I0903 23:41:58.805760  171911 main.go:141] libmachine: Using API Version  1
	I0903 23:41:58.805799  171911 main.go:141] libmachine: () Calling .SetConfigRaw
	I0903 23:41:58.806176  171911 main.go:141] libmachine: () Calling .GetMachineName
	I0903 23:41:58.806424  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .DriverName
	I0903 23:41:58.808193  171911 out.go:179] * Kubernetes 1.34.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.34.0
	I0903 23:41:58.809451  171911 driver.go:421] Setting default libvirt URI to qemu:///system
	I0903 23:41:58.809758  171911 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21341-109162/.minikube/bin/docker-machine-driver-kvm2
	I0903 23:41:58.809795  171911 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0903 23:41:58.825067  171911 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37531
	I0903 23:41:58.825609  171911 main.go:141] libmachine: () Calling .GetVersion
	I0903 23:41:58.826091  171911 main.go:141] libmachine: Using API Version  1
	I0903 23:41:58.826116  171911 main.go:141] libmachine: () Calling .SetConfigRaw
	I0903 23:41:58.826506  171911 main.go:141] libmachine: () Calling .GetMachineName
	I0903 23:41:58.826651  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .DriverName
	I0903 23:41:58.862143  171911 out.go:179] * Using the kvm2 driver based on existing profile
	I0903 23:41:58.863156  171911 start.go:304] selected driver: kvm2
	I0903 23:41:58.863168  171911 start.go:918] validating driver "kvm2" against &{Name:old-k8s-version-335468 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21147/minikube-v1.36.0-1753487480-21147-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-335468 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.80 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0903 23:41:58.863278  171911 start.go:929] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0903 23:41:58.863960  171911 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0903 23:41:58.864040  171911 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21341-109162/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0903 23:41:58.879770  171911 install.go:137] /home/jenkins/minikube-integration/21341-109162/.minikube/bin/docker-machine-driver-kvm2 version is 1.36.0
	I0903 23:41:58.880346  171911 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0903 23:41:58.880393  171911 cni.go:84] Creating CNI manager for ""
	I0903 23:41:58.880445  171911 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0903 23:41:58.880503  171911 start.go:348] cluster config:
	{Name:old-k8s-version-335468 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21147/minikube-v1.36.0-1753487480-21147-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-335468 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.80 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0903 23:41:58.880659  171911 iso.go:125] acquiring lock: {Name:mk1032fbced3c9e76ba5a04480289e9f07d0eb9c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0903 23:41:58.882387  171911 out.go:179] * Starting "old-k8s-version-335468" primary control-plane node in "old-k8s-version-335468" cluster
	I0903 23:41:58.883545  171911 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0903 23:41:58.883582  171911 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21341-109162/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0903 23:41:58.883591  171911 cache.go:58] Caching tarball of preloaded images
	I0903 23:41:58.883679  171911 preload.go:172] Found /home/jenkins/minikube-integration/21341-109162/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0903 23:41:58.883689  171911 cache.go:61] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0903 23:41:58.883774  171911 profile.go:143] Saving config to /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/old-k8s-version-335468/config.json ...
	I0903 23:41:58.883966  171911 start.go:360] acquireMachinesLock for old-k8s-version-335468: {Name:mkcbe368d68a51a2a3c0eadc653c4df7d9736b4b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0903 23:41:58.884013  171911 start.go:364] duration metric: took 27.848µs to acquireMachinesLock for "old-k8s-version-335468"
	I0903 23:41:58.884027  171911 start.go:96] Skipping create...Using existing machine configuration
	I0903 23:41:58.884034  171911 fix.go:54] fixHost starting: 
	I0903 23:41:58.884290  171911 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21341-109162/.minikube/bin/docker-machine-driver-kvm2
	I0903 23:41:58.884339  171911 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0903 23:41:58.899629  171911 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38175
	I0903 23:41:58.900295  171911 main.go:141] libmachine: () Calling .GetVersion
	I0903 23:41:58.901063  171911 main.go:141] libmachine: Using API Version  1
	I0903 23:41:58.901090  171911 main.go:141] libmachine: () Calling .SetConfigRaw
	I0903 23:41:58.901496  171911 main.go:141] libmachine: () Calling .GetMachineName
	I0903 23:41:58.901698  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .DriverName
	I0903 23:41:58.901857  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetState
	I0903 23:41:58.903463  171911 fix.go:112] recreateIfNeeded on old-k8s-version-335468: state=Stopped err=<nil>
	I0903 23:41:58.903488  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .DriverName
	W0903 23:41:58.903630  171911 fix.go:138] unexpected machine state, will restart: <nil>
	I0903 23:41:58.905426  171911 out.go:252] * Restarting existing kvm2 VM for "old-k8s-version-335468" ...
	I0903 23:41:58.905455  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .Start
	I0903 23:41:58.905612  171911 main.go:141] libmachine: (old-k8s-version-335468) starting domain...
	I0903 23:41:58.905634  171911 main.go:141] libmachine: (old-k8s-version-335468) ensuring networks are active...
	I0903 23:41:58.906424  171911 main.go:141] libmachine: (old-k8s-version-335468) Ensuring network default is active
	I0903 23:41:58.906730  171911 main.go:141] libmachine: (old-k8s-version-335468) Ensuring network mk-old-k8s-version-335468 is active
	I0903 23:41:58.907059  171911 main.go:141] libmachine: (old-k8s-version-335468) getting domain XML...
	I0903 23:41:58.907800  171911 main.go:141] libmachine: (old-k8s-version-335468) creating domain...
	I0903 23:42:00.140356  171911 main.go:141] libmachine: (old-k8s-version-335468) waiting for IP...
	I0903 23:42:00.141202  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has defined MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
	I0903 23:42:00.141582  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | unable to find current IP address of domain old-k8s-version-335468 in network mk-old-k8s-version-335468
	I0903 23:42:00.141709  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | I0903 23:42:00.141590  171947 retry.go:31] will retry after 276.832755ms: waiting for domain to come up
	I0903 23:42:00.420407  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has defined MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
	I0903 23:42:00.420855  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | unable to find current IP address of domain old-k8s-version-335468 in network mk-old-k8s-version-335468
	I0903 23:42:00.420917  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | I0903 23:42:00.420836  171947 retry.go:31] will retry after 314.668622ms: waiting for domain to come up
	I0903 23:42:00.737468  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has defined MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
	I0903 23:42:00.737871  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | unable to find current IP address of domain old-k8s-version-335468 in network mk-old-k8s-version-335468
	I0903 23:42:00.737901  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | I0903 23:42:00.737828  171947 retry.go:31] will retry after 345.8826ms: waiting for domain to come up
	I0903 23:42:01.085701  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has defined MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
	I0903 23:42:01.086185  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | unable to find current IP address of domain old-k8s-version-335468 in network mk-old-k8s-version-335468
	I0903 23:42:01.086217  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | I0903 23:42:01.086168  171947 retry.go:31] will retry after 426.296812ms: waiting for domain to come up
	I0903 23:42:01.513991  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has defined MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
	I0903 23:42:01.514453  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | unable to find current IP address of domain old-k8s-version-335468 in network mk-old-k8s-version-335468
	I0903 23:42:01.514482  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | I0903 23:42:01.514426  171947 retry.go:31] will retry after 602.972692ms: waiting for domain to come up
	I0903 23:42:02.119438  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has defined MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
	I0903 23:42:02.119856  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | unable to find current IP address of domain old-k8s-version-335468 in network mk-old-k8s-version-335468
	I0903 23:42:02.119885  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | I0903 23:42:02.119827  171947 retry.go:31] will retry after 798.351499ms: waiting for domain to come up
	I0903 23:42:02.919839  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has defined MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
	I0903 23:42:02.920276  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | unable to find current IP address of domain old-k8s-version-335468 in network mk-old-k8s-version-335468
	I0903 23:42:02.920307  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | I0903 23:42:02.920220  171947 retry.go:31] will retry after 1.022190105s: waiting for domain to come up
	I0903 23:42:03.944354  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has defined MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
	I0903 23:42:03.944807  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | unable to find current IP address of domain old-k8s-version-335468 in network mk-old-k8s-version-335468
	I0903 23:42:03.944840  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | I0903 23:42:03.944747  171947 retry.go:31] will retry after 1.29364095s: waiting for domain to come up
	I0903 23:42:05.240165  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has defined MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
	I0903 23:42:05.240547  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | unable to find current IP address of domain old-k8s-version-335468 in network mk-old-k8s-version-335468
	I0903 23:42:05.240578  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | I0903 23:42:05.240525  171947 retry.go:31] will retry after 1.368503788s: waiting for domain to come up
	I0903 23:42:06.611109  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has defined MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
	I0903 23:42:06.611618  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | unable to find current IP address of domain old-k8s-version-335468 in network mk-old-k8s-version-335468
	I0903 23:42:06.611652  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | I0903 23:42:06.611578  171947 retry.go:31] will retry after 2.084047059s: waiting for domain to come up
	I0903 23:42:08.698604  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has defined MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
	I0903 23:42:08.699065  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | unable to find current IP address of domain old-k8s-version-335468 in network mk-old-k8s-version-335468
	I0903 23:42:08.699089  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | I0903 23:42:08.699048  171947 retry.go:31] will retry after 2.491740737s: waiting for domain to come up
	I0903 23:42:11.193535  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has defined MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
	I0903 23:42:11.194024  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | unable to find current IP address of domain old-k8s-version-335468 in network mk-old-k8s-version-335468
	I0903 23:42:11.194066  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | I0903 23:42:11.194000  171947 retry.go:31] will retry after 2.442590545s: waiting for domain to come up
	I0903 23:42:13.638462  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has defined MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
	I0903 23:42:13.638791  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | unable to find current IP address of domain old-k8s-version-335468 in network mk-old-k8s-version-335468
	I0903 23:42:13.638812  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | I0903 23:42:13.638754  171947 retry.go:31] will retry after 4.493184117s: waiting for domain to come up
	I0903 23:42:18.134025  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has defined MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
	I0903 23:42:18.134463  171911 main.go:141] libmachine: (old-k8s-version-335468) found domain IP: 192.168.61.80
	I0903 23:42:18.134496  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has current primary IP address 192.168.61.80 and MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
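
    The retry.go entries above poll for the domain's DHCP lease with growing delays (277ms, 314ms, ... up to several seconds) until the IP appears. A minimal sketch of that pattern, assuming a stubbed lookupIP callback in place of the real libvirt lease query (waitForIP and lookupIP are illustrative names, not minikube's API):

    package main

    import (
        "fmt"
        "math/rand"
        "time"
    )

    func waitForIP(lookupIP func() (string, bool), maxWait time.Duration) (string, error) {
        deadline := time.Now().Add(maxWait)
        delay := 250 * time.Millisecond
        for time.Now().Before(deadline) {
            if ip, ok := lookupIP(); ok {
                return ip, nil
            }
            // Grow the delay with a little jitter, capped so polling continues.
            jitter := time.Duration(rand.Int63n(int64(delay) / 4))
            time.Sleep(delay + jitter)
            if delay *= 2; delay > 5*time.Second {
                delay = 5 * time.Second
            }
        }
        return "", fmt.Errorf("no IP within %s", maxWait)
    }

    func main() {
        calls := 0
        ip, err := waitForIP(func() (string, bool) {
            calls++
            return "192.168.61.80", calls > 3 // pretend the lease shows up on the 4th poll
        }, time.Minute)
        fmt.Println(ip, err)
    }
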
	I0903 23:42:18.134511  171911 main.go:141] libmachine: (old-k8s-version-335468) reserving static IP address...
	I0903 23:42:18.134886  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | found host DHCP lease matching {name: "old-k8s-version-335468", mac: "52:54:00:a2:6b:b9", ip: "192.168.61.80"} in network mk-old-k8s-version-335468: {Iface:virbr3 ExpiryTime:2025-09-04 00:42:10 +0000 UTC Type:0 Mac:52:54:00:a2:6b:b9 Iaid: IPaddr:192.168.61.80 Prefix:24 Hostname:old-k8s-version-335468 Clientid:01:52:54:00:a2:6b:b9}
	I0903 23:42:18.134919  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | skip adding static IP to network mk-old-k8s-version-335468 - found existing host DHCP lease matching {name: "old-k8s-version-335468", mac: "52:54:00:a2:6b:b9", ip: "192.168.61.80"}
	I0903 23:42:18.134935  171911 main.go:141] libmachine: (old-k8s-version-335468) reserved static IP address 192.168.61.80 for domain old-k8s-version-335468
	I0903 23:42:18.134949  171911 main.go:141] libmachine: (old-k8s-version-335468) waiting for SSH...
	I0903 23:42:18.134965  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | Getting to WaitForSSH function...
	I0903 23:42:18.137067  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has defined MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
	I0903 23:42:18.137412  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:6b:b9", ip: ""} in network mk-old-k8s-version-335468: {Iface:virbr3 ExpiryTime:2025-09-04 00:42:10 +0000 UTC Type:0 Mac:52:54:00:a2:6b:b9 Iaid: IPaddr:192.168.61.80 Prefix:24 Hostname:old-k8s-version-335468 Clientid:01:52:54:00:a2:6b:b9}
	I0903 23:42:18.137435  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has defined IP address 192.168.61.80 and MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
	I0903 23:42:18.137591  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | Using SSH client type: external
	I0903 23:42:18.137615  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | Using SSH private key: /home/jenkins/minikube-integration/21341-109162/.minikube/machines/old-k8s-version-335468/id_rsa (-rw-------)
	I0903 23:42:18.137661  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.80 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/21341-109162/.minikube/machines/old-k8s-version-335468/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0903 23:42:18.137678  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | About to run SSH command:
	I0903 23:42:18.137689  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | exit 0
	I0903 23:42:18.265417  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | SSH cmd err, output: <nil>: 
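
    The WaitForSSH step above simply re-runs an external `ssh ... "exit 0"` until it succeeds, using the options printed in the log. A minimal sketch of that probe loop; the host, user, key path, and ssh options are taken from the log, while the waitForSSH wrapper and its retry cadence are assumptions for illustration:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func waitForSSH(host, keyPath string, attempts int) error {
        args := []string{
            "-F", "/dev/null",
            "-o", "ConnectTimeout=10",
            "-o", "StrictHostKeyChecking=no",
            "-o", "UserKnownHostsFile=/dev/null",
            "-o", "IdentitiesOnly=yes",
            "-i", keyPath,
            "docker@" + host,
            "exit 0", // success means sshd is up and the key is accepted
        }
        var err error
        for i := 0; i < attempts; i++ {
            if err = exec.Command("ssh", args...).Run(); err == nil {
                return nil
            }
            time.Sleep(2 * time.Second)
        }
        return fmt.Errorf("ssh never came up: %w", err)
    }

    func main() {
        fmt.Println(waitForSSH("192.168.61.80",
            "/home/jenkins/minikube-integration/21341-109162/.minikube/machines/old-k8s-version-335468/id_rsa", 30))
    }
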
	I0903 23:42:18.265809  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetConfigRaw
	I0903 23:42:18.266396  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetIP
	I0903 23:42:18.269013  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has defined MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
	I0903 23:42:18.269322  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:6b:b9", ip: ""} in network mk-old-k8s-version-335468: {Iface:virbr3 ExpiryTime:2025-09-04 00:42:10 +0000 UTC Type:0 Mac:52:54:00:a2:6b:b9 Iaid: IPaddr:192.168.61.80 Prefix:24 Hostname:old-k8s-version-335468 Clientid:01:52:54:00:a2:6b:b9}
	I0903 23:42:18.269352  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has defined IP address 192.168.61.80 and MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
	I0903 23:42:18.269559  171911 profile.go:143] Saving config to /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/old-k8s-version-335468/config.json ...
	I0903 23:42:18.269795  171911 machine.go:93] provisionDockerMachine start ...
	I0903 23:42:18.269824  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .DriverName
	I0903 23:42:18.270044  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetSSHHostname
	I0903 23:42:18.272246  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has defined MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
	I0903 23:42:18.272543  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:6b:b9", ip: ""} in network mk-old-k8s-version-335468: {Iface:virbr3 ExpiryTime:2025-09-04 00:42:10 +0000 UTC Type:0 Mac:52:54:00:a2:6b:b9 Iaid: IPaddr:192.168.61.80 Prefix:24 Hostname:old-k8s-version-335468 Clientid:01:52:54:00:a2:6b:b9}
	I0903 23:42:18.272584  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has defined IP address 192.168.61.80 and MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
	I0903 23:42:18.272665  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetSSHPort
	I0903 23:42:18.272846  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetSSHKeyPath
	I0903 23:42:18.272997  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetSSHKeyPath
	I0903 23:42:18.273116  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetSSHUsername
	I0903 23:42:18.273294  171911 main.go:141] libmachine: Using SSH client type: native
	I0903 23:42:18.273564  171911 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83a420] 0x83d120 <nil>  [] 0s} 192.168.61.80 22 <nil> <nil>}
	I0903 23:42:18.273578  171911 main.go:141] libmachine: About to run SSH command:
	hostname
	I0903 23:42:18.389858  171911 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0903 23:42:18.389891  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetMachineName
	I0903 23:42:18.390184  171911 buildroot.go:166] provisioning hostname "old-k8s-version-335468"
	I0903 23:42:18.390213  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetMachineName
	I0903 23:42:18.390400  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetSSHHostname
	I0903 23:42:18.393065  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has defined MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
	I0903 23:42:18.393474  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:6b:b9", ip: ""} in network mk-old-k8s-version-335468: {Iface:virbr3 ExpiryTime:2025-09-04 00:42:10 +0000 UTC Type:0 Mac:52:54:00:a2:6b:b9 Iaid: IPaddr:192.168.61.80 Prefix:24 Hostname:old-k8s-version-335468 Clientid:01:52:54:00:a2:6b:b9}
	I0903 23:42:18.393508  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has defined IP address 192.168.61.80 and MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
	I0903 23:42:18.393629  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetSSHPort
	I0903 23:42:18.393787  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetSSHKeyPath
	I0903 23:42:18.393963  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetSSHKeyPath
	I0903 23:42:18.394113  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetSSHUsername
	I0903 23:42:18.394288  171911 main.go:141] libmachine: Using SSH client type: native
	I0903 23:42:18.394494  171911 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83a420] 0x83d120 <nil>  [] 0s} 192.168.61.80 22 <nil> <nil>}
	I0903 23:42:18.394507  171911 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-335468 && echo "old-k8s-version-335468" | sudo tee /etc/hostname
	I0903 23:42:18.526146  171911 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-335468
	
	I0903 23:42:18.526174  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetSSHHostname
	I0903 23:42:18.528979  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has defined MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
	I0903 23:42:18.529317  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:6b:b9", ip: ""} in network mk-old-k8s-version-335468: {Iface:virbr3 ExpiryTime:2025-09-04 00:42:10 +0000 UTC Type:0 Mac:52:54:00:a2:6b:b9 Iaid: IPaddr:192.168.61.80 Prefix:24 Hostname:old-k8s-version-335468 Clientid:01:52:54:00:a2:6b:b9}
	I0903 23:42:18.529341  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has defined IP address 192.168.61.80 and MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
	I0903 23:42:18.529521  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetSSHPort
	I0903 23:42:18.529715  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetSSHKeyPath
	I0903 23:42:18.529887  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetSSHKeyPath
	I0903 23:42:18.530039  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetSSHUsername
	I0903 23:42:18.530198  171911 main.go:141] libmachine: Using SSH client type: native
	I0903 23:42:18.530443  171911 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83a420] 0x83d120 <nil>  [] 0s} 192.168.61.80 22 <nil> <nil>}
	I0903 23:42:18.530462  171911 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-335468' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-335468/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-335468' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0903 23:42:18.655502  171911 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0903 23:42:18.655540  171911 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21341-109162/.minikube CaCertPath:/home/jenkins/minikube-integration/21341-109162/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21341-109162/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21341-109162/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21341-109162/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21341-109162/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21341-109162/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21341-109162/.minikube}
	I0903 23:42:18.655578  171911 buildroot.go:174] setting up certificates
	I0903 23:42:18.655591  171911 provision.go:84] configureAuth start
	I0903 23:42:18.655604  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetMachineName
	I0903 23:42:18.655930  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetIP
	I0903 23:42:18.658889  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has defined MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
	I0903 23:42:18.659364  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:6b:b9", ip: ""} in network mk-old-k8s-version-335468: {Iface:virbr3 ExpiryTime:2025-09-04 00:42:10 +0000 UTC Type:0 Mac:52:54:00:a2:6b:b9 Iaid: IPaddr:192.168.61.80 Prefix:24 Hostname:old-k8s-version-335468 Clientid:01:52:54:00:a2:6b:b9}
	I0903 23:42:18.659393  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has defined IP address 192.168.61.80 and MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
	I0903 23:42:18.659574  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetSSHHostname
	I0903 23:42:18.661700  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has defined MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
	I0903 23:42:18.661987  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:6b:b9", ip: ""} in network mk-old-k8s-version-335468: {Iface:virbr3 ExpiryTime:2025-09-04 00:42:10 +0000 UTC Type:0 Mac:52:54:00:a2:6b:b9 Iaid: IPaddr:192.168.61.80 Prefix:24 Hostname:old-k8s-version-335468 Clientid:01:52:54:00:a2:6b:b9}
	I0903 23:42:18.662012  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has defined IP address 192.168.61.80 and MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
	I0903 23:42:18.662134  171911 provision.go:143] copyHostCerts
	I0903 23:42:18.662197  171911 exec_runner.go:144] found /home/jenkins/minikube-integration/21341-109162/.minikube/ca.pem, removing ...
	I0903 23:42:18.662222  171911 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21341-109162/.minikube/ca.pem
	I0903 23:42:18.662298  171911 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21341-109162/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21341-109162/.minikube/ca.pem (1078 bytes)
	I0903 23:42:18.662418  171911 exec_runner.go:144] found /home/jenkins/minikube-integration/21341-109162/.minikube/cert.pem, removing ...
	I0903 23:42:18.662431  171911 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21341-109162/.minikube/cert.pem
	I0903 23:42:18.662468  171911 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21341-109162/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21341-109162/.minikube/cert.pem (1123 bytes)
	I0903 23:42:18.662563  171911 exec_runner.go:144] found /home/jenkins/minikube-integration/21341-109162/.minikube/key.pem, removing ...
	I0903 23:42:18.662573  171911 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21341-109162/.minikube/key.pem
	I0903 23:42:18.662606  171911 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21341-109162/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21341-109162/.minikube/key.pem (1675 bytes)
	I0903 23:42:18.662675  171911 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21341-109162/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21341-109162/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21341-109162/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-335468 san=[127.0.0.1 192.168.61.80 localhost minikube old-k8s-version-335468]
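
    The "generating server cert" entry above lists the SANs baked into the machine's server certificate (127.0.0.1, 192.168.61.80, localhost, minikube, the node name) and the org. A minimal sketch of producing such a certificate with Go's crypto/x509; it is self-signed here for brevity, whereas minikube signs with its CA key, and the 26280h lifetime is copied from the CertExpiration field later in the log:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-335468"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            DNSNames:     []string{"localhost", "minikube", "old-k8s-version-335468"},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.61.80")},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
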
	I0903 23:42:18.981415  171911 provision.go:177] copyRemoteCerts
	I0903 23:42:18.981472  171911 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0903 23:42:18.981497  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetSSHHostname
	I0903 23:42:18.983969  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has defined MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
	I0903 23:42:18.984256  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:6b:b9", ip: ""} in network mk-old-k8s-version-335468: {Iface:virbr3 ExpiryTime:2025-09-04 00:42:10 +0000 UTC Type:0 Mac:52:54:00:a2:6b:b9 Iaid: IPaddr:192.168.61.80 Prefix:24 Hostname:old-k8s-version-335468 Clientid:01:52:54:00:a2:6b:b9}
	I0903 23:42:18.984285  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has defined IP address 192.168.61.80 and MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
	I0903 23:42:18.984430  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetSSHPort
	I0903 23:42:18.984657  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetSSHKeyPath
	I0903 23:42:18.984813  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetSSHUsername
	I0903 23:42:18.984946  171911 sshutil.go:53] new ssh client: &{IP:192.168.61.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21341-109162/.minikube/machines/old-k8s-version-335468/id_rsa Username:docker}
	I0903 23:42:19.073026  171911 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0903 23:42:19.100256  171911 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0903 23:42:19.127225  171911 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0903 23:42:19.154111  171911 provision.go:87] duration metric: took 498.506096ms to configureAuth
	I0903 23:42:19.154138  171911 buildroot.go:189] setting minikube options for container-runtime
	I0903 23:42:19.154358  171911 config.go:182] Loaded profile config "old-k8s-version-335468": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0903 23:42:19.154450  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetSSHHostname
	I0903 23:42:19.157159  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has defined MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
	I0903 23:42:19.157588  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:6b:b9", ip: ""} in network mk-old-k8s-version-335468: {Iface:virbr3 ExpiryTime:2025-09-04 00:42:10 +0000 UTC Type:0 Mac:52:54:00:a2:6b:b9 Iaid: IPaddr:192.168.61.80 Prefix:24 Hostname:old-k8s-version-335468 Clientid:01:52:54:00:a2:6b:b9}
	I0903 23:42:19.157613  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has defined IP address 192.168.61.80 and MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
	I0903 23:42:19.157774  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetSSHPort
	I0903 23:42:19.157993  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetSSHKeyPath
	I0903 23:42:19.158192  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetSSHKeyPath
	I0903 23:42:19.158345  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetSSHUsername
	I0903 23:42:19.158511  171911 main.go:141] libmachine: Using SSH client type: native
	I0903 23:42:19.158713  171911 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83a420] 0x83d120 <nil>  [] 0s} 192.168.61.80 22 <nil> <nil>}
	I0903 23:42:19.158727  171911 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0903 23:42:19.403450  171911 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0903 23:42:19.403503  171911 machine.go:96] duration metric: took 1.133688609s to provisionDockerMachine
	I0903 23:42:19.403516  171911 start.go:293] postStartSetup for "old-k8s-version-335468" (driver="kvm2")
	I0903 23:42:19.403546  171911 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0903 23:42:19.403575  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .DriverName
	I0903 23:42:19.403961  171911 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0903 23:42:19.403992  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetSSHHostname
	I0903 23:42:19.406435  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has defined MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
	I0903 23:42:19.406792  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:6b:b9", ip: ""} in network mk-old-k8s-version-335468: {Iface:virbr3 ExpiryTime:2025-09-04 00:42:10 +0000 UTC Type:0 Mac:52:54:00:a2:6b:b9 Iaid: IPaddr:192.168.61.80 Prefix:24 Hostname:old-k8s-version-335468 Clientid:01:52:54:00:a2:6b:b9}
	I0903 23:42:19.406820  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has defined IP address 192.168.61.80 and MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
	I0903 23:42:19.406954  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetSSHPort
	I0903 23:42:19.407146  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetSSHKeyPath
	I0903 23:42:19.407310  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetSSHUsername
	I0903 23:42:19.407431  171911 sshutil.go:53] new ssh client: &{IP:192.168.61.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21341-109162/.minikube/machines/old-k8s-version-335468/id_rsa Username:docker}
	I0903 23:42:19.498010  171911 ssh_runner.go:195] Run: cat /etc/os-release
	I0903 23:42:19.502446  171911 info.go:137] Remote host: Buildroot 2025.02
	I0903 23:42:19.502472  171911 filesync.go:126] Scanning /home/jenkins/minikube-integration/21341-109162/.minikube/addons for local assets ...
	I0903 23:42:19.502533  171911 filesync.go:126] Scanning /home/jenkins/minikube-integration/21341-109162/.minikube/files for local assets ...
	I0903 23:42:19.502606  171911 filesync.go:149] local asset: /home/jenkins/minikube-integration/21341-109162/.minikube/files/etc/ssl/certs/1132882.pem -> 1132882.pem in /etc/ssl/certs
	I0903 23:42:19.502691  171911 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0903 23:42:19.513148  171911 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/files/etc/ssl/certs/1132882.pem --> /etc/ssl/certs/1132882.pem (1708 bytes)
	I0903 23:42:19.539923  171911 start.go:296] duration metric: took 136.378767ms for postStartSetup
	I0903 23:42:19.539966  171911 fix.go:56] duration metric: took 20.655932447s for fixHost
	I0903 23:42:19.539987  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetSSHHostname
	I0903 23:42:19.542771  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has defined MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
	I0903 23:42:19.543135  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:6b:b9", ip: ""} in network mk-old-k8s-version-335468: {Iface:virbr3 ExpiryTime:2025-09-04 00:42:10 +0000 UTC Type:0 Mac:52:54:00:a2:6b:b9 Iaid: IPaddr:192.168.61.80 Prefix:24 Hostname:old-k8s-version-335468 Clientid:01:52:54:00:a2:6b:b9}
	I0903 23:42:19.543163  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has defined IP address 192.168.61.80 and MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
	I0903 23:42:19.543432  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetSSHPort
	I0903 23:42:19.543661  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetSSHKeyPath
	I0903 23:42:19.543924  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetSSHKeyPath
	I0903 23:42:19.544083  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetSSHUsername
	I0903 23:42:19.544239  171911 main.go:141] libmachine: Using SSH client type: native
	I0903 23:42:19.544450  171911 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83a420] 0x83d120 <nil>  [] 0s} 192.168.61.80 22 <nil> <nil>}
	I0903 23:42:19.544464  171911 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0903 23:42:19.658283  171911 main.go:141] libmachine: SSH cmd err, output: <nil>: 1756942939.619184337
	
	I0903 23:42:19.658310  171911 fix.go:216] guest clock: 1756942939.619184337
	I0903 23:42:19.658320  171911 fix.go:229] Guest: 2025-09-03 23:42:19.619184337 +0000 UTC Remote: 2025-09-03 23:42:19.539969783 +0000 UTC m=+20.799287975 (delta=79.214554ms)
	I0903 23:42:19.658340  171911 fix.go:200] guest clock delta is within tolerance: 79.214554ms
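
    The guest-clock check above runs `date +%s.%N` in the VM, parses the fractional seconds, and compares against the host clock, here yielding a 79.2ms delta within tolerance. A minimal sketch of that arithmetic, with runRemote stubbed to the value from the log (the stub and the 2s tolerance are assumptions; minikube's actual tolerance may differ):

    package main

    import (
        "fmt"
        "math"
        "strconv"
        "time"
    )

    func clockDelta(guestOut string, hostNow time.Time) (time.Duration, error) {
        secs, err := strconv.ParseFloat(guestOut, 64)
        if err != nil {
            return 0, err
        }
        // float64 loses a little sub-microsecond precision; fine for a skew check.
        guest := time.Unix(0, int64(secs*float64(time.Second)))
        return guest.Sub(hostNow), nil
    }

    func main() {
        runRemote := func() string { return "1756942939.619184337" } // guest value from the log
        delta, err := clockDelta(runRemote(), time.Unix(0, 1756942939539969783)) // host value from the log
        if err != nil {
            panic(err)
        }
        const tolerance = 2 * time.Second
        fmt.Printf("delta=%s within tolerance: %v\n", delta, math.Abs(float64(delta)) < float64(tolerance))
    }
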
	I0903 23:42:19.658346  171911 start.go:83] releasing machines lock for "old-k8s-version-335468", held for 20.774323746s
	I0903 23:42:19.658367  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .DriverName
	I0903 23:42:19.658686  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetIP
	I0903 23:42:19.661465  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has defined MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
	I0903 23:42:19.661820  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:6b:b9", ip: ""} in network mk-old-k8s-version-335468: {Iface:virbr3 ExpiryTime:2025-09-04 00:42:10 +0000 UTC Type:0 Mac:52:54:00:a2:6b:b9 Iaid: IPaddr:192.168.61.80 Prefix:24 Hostname:old-k8s-version-335468 Clientid:01:52:54:00:a2:6b:b9}
	I0903 23:42:19.661848  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has defined IP address 192.168.61.80 and MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
	I0903 23:42:19.662028  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .DriverName
	I0903 23:42:19.662525  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .DriverName
	I0903 23:42:19.662702  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .DriverName
	I0903 23:42:19.662785  171911 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0903 23:42:19.662846  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetSSHHostname
	I0903 23:42:19.662927  171911 ssh_runner.go:195] Run: cat /version.json
	I0903 23:42:19.662943  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetSSHHostname
	I0903 23:42:19.665354  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has defined MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
	I0903 23:42:19.665683  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has defined MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
	I0903 23:42:19.665718  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:6b:b9", ip: ""} in network mk-old-k8s-version-335468: {Iface:virbr3 ExpiryTime:2025-09-04 00:42:10 +0000 UTC Type:0 Mac:52:54:00:a2:6b:b9 Iaid: IPaddr:192.168.61.80 Prefix:24 Hostname:old-k8s-version-335468 Clientid:01:52:54:00:a2:6b:b9}
	I0903 23:42:19.665740  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has defined IP address 192.168.61.80 and MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
	I0903 23:42:19.665938  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetSSHPort
	I0903 23:42:19.666142  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:6b:b9", ip: ""} in network mk-old-k8s-version-335468: {Iface:virbr3 ExpiryTime:2025-09-04 00:42:10 +0000 UTC Type:0 Mac:52:54:00:a2:6b:b9 Iaid: IPaddr:192.168.61.80 Prefix:24 Hostname:old-k8s-version-335468 Clientid:01:52:54:00:a2:6b:b9}
	I0903 23:42:19.666154  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetSSHKeyPath
	I0903 23:42:19.666167  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has defined IP address 192.168.61.80 and MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
	I0903 23:42:19.666342  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetSSHUsername
	I0903 23:42:19.666367  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetSSHPort
	I0903 23:42:19.666528  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetSSHKeyPath
	I0903 23:42:19.666520  171911 sshutil.go:53] new ssh client: &{IP:192.168.61.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21341-109162/.minikube/machines/old-k8s-version-335468/id_rsa Username:docker}
	I0903 23:42:19.666673  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetSSHUsername
	I0903 23:42:19.666795  171911 sshutil.go:53] new ssh client: &{IP:192.168.61.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21341-109162/.minikube/machines/old-k8s-version-335468/id_rsa Username:docker}
	I0903 23:42:19.778070  171911 ssh_runner.go:195] Run: systemctl --version
	I0903 23:42:19.783809  171911 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0903 23:42:19.925729  171911 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0903 23:42:19.931814  171911 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0903 23:42:19.931870  171911 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0903 23:42:19.950008  171911 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0903 23:42:19.950038  171911 start.go:495] detecting cgroup driver to use...
	I0903 23:42:19.950104  171911 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0903 23:42:19.969078  171911 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0903 23:42:19.984800  171911 docker.go:218] disabling cri-docker service (if available) ...
	I0903 23:42:19.984862  171911 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0903 23:42:19.999909  171911 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0903 23:42:20.014636  171911 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0903 23:42:20.158742  171911 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0903 23:42:20.297981  171911 docker.go:234] disabling docker service ...
	I0903 23:42:20.298074  171911 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0903 23:42:20.314384  171911 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0903 23:42:20.327885  171911 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0903 23:42:20.530158  171911 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0903 23:42:20.665612  171911 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0903 23:42:20.680150  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0903 23:42:20.700792  171911 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0903 23:42:20.700857  171911 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0903 23:42:20.712182  171911 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0903 23:42:20.712258  171911 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0903 23:42:20.723777  171911 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0903 23:42:20.734863  171911 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
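
    The four sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf: pin the pause image, switch cgroup_manager to cgroupfs, drop any existing conmon_cgroup line, and re-add conmon_cgroup = "pod" after cgroup_manager. A minimal sketch performing the same substitutions on an in-memory config (the sample input is an assumption; the regexes mirror the sed patterns):

    package main

    import (
        "fmt"
        "regexp"
    )

    func main() {
        conf := "pause_image = \"registry.k8s.io/pause:3.9\"\ncgroup_manager = \"systemd\"\nconmon_cgroup = \"system.slice\"\n"
        conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.2"`)
        conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
        // Delete any existing conmon_cgroup line, then re-add it after cgroup_manager.
        conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*$\n`).ReplaceAllString(conf, "")
        conf = regexp.MustCompile(`(?m)^cgroup_manager = .*$`).ReplaceAllString(conf, "$0\nconmon_cgroup = \"pod\"")
        fmt.Print(conf)
    }
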
	I0903 23:42:20.746438  171911 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0903 23:42:20.759910  171911 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0903 23:42:20.769436  171911 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0903 23:42:20.769493  171911 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0903 23:42:20.788756  171911 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
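
    The sysctl failure above is expected: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist until the br_netfilter module is loaded, so minikube falls back to modprobe and then enables IPv4 forwarding. A minimal sketch of that sequence (must run as root on Linux; it mirrors the commands in the log, not minikube's code):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        const knob = "/proc/sys/net/bridge/bridge-nf-call-iptables"
        if _, err := os.Stat(knob); err != nil {
            // The sysctl is absent until br_netfilter is loaded.
            if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
                fmt.Printf("modprobe br_netfilter: %v: %s\n", err, out)
                return
            }
        }
        if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0o644); err != nil {
            fmt.Println("enable ip_forward:", err)
            return
        }
        fmt.Println("bridge netfilter and ip_forward configured")
    }
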
	I0903 23:42:20.799437  171911 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0903 23:42:20.954989  171911 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0903 23:42:21.072550  171911 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0903 23:42:21.072649  171911 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0903 23:42:21.077536  171911 start.go:563] Will wait 60s for crictl version
	I0903 23:42:21.077592  171911 ssh_runner.go:195] Run: which crictl
	I0903 23:42:21.081093  171911 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0903 23:42:21.119015  171911 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0903 23:42:21.119097  171911 ssh_runner.go:195] Run: crio --version
	I0903 23:42:21.146341  171911 ssh_runner.go:195] Run: crio --version
	I0903 23:42:21.176700  171911 out.go:179] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0903 23:42:21.177731  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetIP
	I0903 23:42:21.180269  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has defined MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
	I0903 23:42:21.180568  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:6b:b9", ip: ""} in network mk-old-k8s-version-335468: {Iface:virbr3 ExpiryTime:2025-09-04 00:42:10 +0000 UTC Type:0 Mac:52:54:00:a2:6b:b9 Iaid: IPaddr:192.168.61.80 Prefix:24 Hostname:old-k8s-version-335468 Clientid:01:52:54:00:a2:6b:b9}
	I0903 23:42:21.180599  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has defined IP address 192.168.61.80 and MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
	I0903 23:42:21.180856  171911 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0903 23:42:21.185094  171911 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0903 23:42:21.198784  171911 kubeadm.go:875] updating cluster {Name:old-k8s-version-335468 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21147/minikube-v1.36.0-1753487480-21147-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-335468 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.80 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0903 23:42:21.198887  171911 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0903 23:42:21.198930  171911 ssh_runner.go:195] Run: sudo crictl images --output json
	I0903 23:42:21.245403  171911 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0903 23:42:21.245474  171911 ssh_runner.go:195] Run: which lz4
	I0903 23:42:21.249531  171911 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0903 23:42:21.253934  171911 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0903 23:42:21.253970  171911 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0903 23:42:22.735338  171911 crio.go:462] duration metric: took 1.48583725s to copy over tarball
	I0903 23:42:22.735409  171911 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0903 23:42:24.901192  171911 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.165749867s)
	I0903 23:42:24.901224  171911 crio.go:469] duration metric: took 2.165856963s to extract the tarball
	I0903 23:42:24.901234  171911 ssh_runner.go:146] rm: /preloaded.tar.lz4
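
    The preload step above copies the ~473MB tarball into the guest (1.49s), extracts it with lz4 into /var (2.17s), and removes it, reporting a duration metric for each phase. A minimal sketch of the copy-and-extract with timing; the scp/tar invocations follow the log, while the timed helper and abbreviated error handling are our own:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func timed(name string, cmd *exec.Cmd) error {
        start := time.Now()
        err := cmd.Run()
        fmt.Printf("%s took %s (err=%v)\n", name, time.Since(start), err)
        return err
    }

    func main() {
        tarball := "/home/jenkins/minikube-integration/21341-109162/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4"
        if err := timed("copy", exec.Command("scp", tarball, "docker@192.168.61.80:/preloaded.tar.lz4")); err != nil {
            return
        }
        extract := exec.Command("ssh", "docker@192.168.61.80",
            "sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4")
        timed("extract", extract)
    }
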
	I0903 23:42:24.945210  171911 ssh_runner.go:195] Run: sudo crictl images --output json
	I0903 23:42:24.977983  171911 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0903 23:42:24.978011  171911 cache_images.go:89] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0903 23:42:24.978093  171911 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0903 23:42:24.978095  171911 image.go:138] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0903 23:42:24.978122  171911 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0903 23:42:24.978134  171911 image.go:138] retrieving image: registry.k8s.io/coredns:1.7.0
	I0903 23:42:24.978092  171911 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0903 23:42:24.978167  171911 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0903 23:42:24.978180  171911 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0903 23:42:24.978151  171911 image.go:138] retrieving image: registry.k8s.io/pause:3.2
	I0903 23:42:24.979632  171911 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0903 23:42:24.979647  171911 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0903 23:42:24.979664  171911 image.go:181] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0903 23:42:24.979669  171911 image.go:181] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0903 23:42:24.979651  171911 image.go:181] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0903 23:42:24.979683  171911 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0903 23:42:24.979708  171911 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0903 23:42:24.979715  171911 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0903 23:42:25.139789  171911 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0903 23:42:25.149556  171911 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0903 23:42:25.153427  171911 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0903 23:42:25.156447  171911 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0903 23:42:25.166085  171911 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0903 23:42:25.178841  171911 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0903 23:42:25.180227  171911 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0903 23:42:25.223305  171911 cache_images.go:117] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0903 23:42:25.223359  171911 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0903 23:42:25.223398  171911 ssh_runner.go:195] Run: which crictl
	I0903 23:42:25.287785  171911 cache_images.go:117] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0903 23:42:25.287834  171911 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0903 23:42:25.287879  171911 ssh_runner.go:195] Run: which crictl
	I0903 23:42:25.303285  171911 cache_images.go:117] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0903 23:42:25.303336  171911 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0903 23:42:25.303345  171911 cache_images.go:117] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0903 23:42:25.303383  171911 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0903 23:42:25.303392  171911 ssh_runner.go:195] Run: which crictl
	I0903 23:42:25.303431  171911 ssh_runner.go:195] Run: which crictl
	I0903 23:42:25.311751  171911 cache_images.go:117] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0903 23:42:25.311798  171911 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0903 23:42:25.311803  171911 cache_images.go:117] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0903 23:42:25.311842  171911 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0903 23:42:25.311855  171911 ssh_runner.go:195] Run: which crictl
	I0903 23:42:25.311888  171911 ssh_runner.go:195] Run: which crictl
	I0903 23:42:25.324120  171911 cache_images.go:117] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0903 23:42:25.324164  171911 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0903 23:42:25.324187  171911 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0903 23:42:25.324202  171911 ssh_runner.go:195] Run: which crictl
	I0903 23:42:25.324241  171911 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0903 23:42:25.324655  171911 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0903 23:42:25.324678  171911 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0903 23:42:25.324906  171911 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0903 23:42:25.325033  171911 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0903 23:42:25.422314  171911 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0903 23:42:25.422412  171911 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0903 23:42:25.436779  171911 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0903 23:42:25.479512  171911 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0903 23:42:25.482280  171911 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0903 23:42:25.482370  171911 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0903 23:42:25.482417  171911 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0903 23:42:25.528977  171911 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0903 23:42:25.529015  171911 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0903 23:42:25.566801  171911 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0903 23:42:25.639744  171911 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0903 23:42:25.639744  171911 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0903 23:42:25.639814  171911 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0903 23:42:25.639829  171911 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0903 23:42:25.680104  171911 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21341-109162/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0903 23:42:25.680249  171911 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0903 23:42:25.680257  171911 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21341-109162/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0903 23:42:25.724922  171911 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21341-109162/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0903 23:42:25.747501  171911 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21341-109162/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0903 23:42:25.747577  171911 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21341-109162/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0903 23:42:25.751768  171911 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21341-109162/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0903 23:42:25.760936  171911 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21341-109162/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0903 23:42:26.285671  171911 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0903 23:42:26.426376  171911 cache_images.go:93] duration metric: took 1.448344647s to LoadCachedImages
	W0903 23:42:26.426480  171911 out.go:285] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/21341-109162/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/21341-109162/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0: no such file or directory
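	The cache-image phase above works in three steps: compare the image ID in the container runtime against the expected hash (the "needs transfer" lines), remove stale copies with crictl rmi, then load each tarball from the local cache. The final warning is a plain file-existence failure on the cached tarball. A minimal sketch of that check, assuming the path layout shown in the log (the helper name is illustrative, not minikube's actual API):

```go
// Hypothetical sketch of the cache lookup behind the warning above: before
// loading an image tarball, stat the file under the cache directory; a
// missing file surfaces as the "no such file or directory" error.
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// cachedImagePath mirrors the layout seen in the log:
// <minikube home>/cache/images/<arch>/<registry path>_<tag>.
func cachedImagePath(home, arch, image string) string {
	name := strings.ReplaceAll(image, ":", "_")
	return filepath.Join(home, "cache", "images", arch, name)
}

func main() {
	p := cachedImagePath("/home/jenkins/minikube-integration/21341-109162/.minikube",
		"amd64", "registry.k8s.io/etcd:3.4.13-0")
	if _, err := os.Stat(p); err != nil {
		// This is the condition that produces "Unable to load cached images".
		fmt.Printf("X Unable to load cached images: stat %s: %v\n", p, err)
		return
	}
	fmt.Println("cached tarball present:", p)
}
```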
	I0903 23:42:26.426499  171911 kubeadm.go:926] updating node { 192.168.61.80 8443 v1.20.0 crio true true} ...
	I0903 23:42:26.426618  171911 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-335468 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.80
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-335468 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0903 23:42:26.426702  171911 ssh_runner.go:195] Run: crio config
	I0903 23:42:26.476895  171911 cni.go:84] Creating CNI manager for ""
	I0903 23:42:26.476919  171911 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0903 23:42:26.476933  171911 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0903 23:42:26.476956  171911 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.80 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-335468 NodeName:old-k8s-version-335468 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.80"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.80 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0903 23:42:26.477114  171911 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.80
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-335468"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.80
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.80"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
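	The kubeadm YAML above is generated on the host and then copied to the node (the kubeadm.yaml.new scp below). A minimal sketch of how such a config can be rendered with Go's text/template, assuming template-based generation as minikube uses internally; the template and field names here are illustrative, not minikube's actual template data:

```go
// Render a fragment of the InitConfiguration seen in the log from a template.
package main

import (
	"os"
	"text/template"
)

const tmpl = `apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
nodeRegistration:
  criSocket: {{.CRISocket}}
  name: "{{.NodeName}}"
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(tmpl))
	_ = t.Execute(os.Stdout, map[string]any{
		"AdvertiseAddress": "192.168.61.80",
		"BindPort":         8443,
		"CRISocket":        "/var/run/crio/crio.sock",
		"NodeName":         "old-k8s-version-335468",
	})
}
```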
	I0903 23:42:26.477233  171911 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0903 23:42:26.490694  171911 binaries.go:44] Found k8s binaries, skipping transfer
	I0903 23:42:26.490775  171911 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0903 23:42:26.501798  171911 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0903 23:42:26.520806  171911 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0903 23:42:26.539068  171911 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0903 23:42:26.558168  171911 ssh_runner.go:195] Run: grep 192.168.61.80	control-plane.minikube.internal$ /etc/hosts
	I0903 23:42:26.562134  171911 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.80	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
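	The bash one-liner above makes the /etc/hosts update idempotent: it filters out any existing line for the control-plane alias, appends the current IP, and copies the result back over /etc/hosts. A hedged Go sketch of the same rewrite (pure illustration; minikube actually runs the bash command remotely over ssh):

```go
// Idempotent hosts-entry update: drop any line already ending in the alias
// (matching grep -v $'\t<alias>$'), then append the current IP.
package main

import (
	"fmt"
	"strings"
)

func upsertHost(hosts, ip, alias string) string {
	var out []string
	for _, line := range strings.Split(hosts, "\n") {
		if strings.HasSuffix(line, "\t"+alias) {
			continue // remove the stale entry for this alias
		}
		if line != "" {
			out = append(out, line)
		}
	}
	out = append(out, ip+"\t"+alias)
	return strings.Join(out, "\n") + "\n"
}

func main() {
	hosts := "127.0.0.1\tlocalhost\n192.168.61.79\tcontrol-plane.minikube.internal\n"
	fmt.Print(upsertHost(hosts, "192.168.61.80", "control-plane.minikube.internal"))
}
```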
	I0903 23:42:26.575449  171911 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0903 23:42:26.711961  171911 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0903 23:42:26.759354  171911 certs.go:68] Setting up /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/old-k8s-version-335468 for IP: 192.168.61.80
	I0903 23:42:26.759380  171911 certs.go:194] generating shared ca certs ...
	I0903 23:42:26.759407  171911 certs.go:226] acquiring lock for ca certs: {Name:mk06ceac58fd506484973632a7bd0b701183c2c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0903 23:42:26.759577  171911 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21341-109162/.minikube/ca.key
	I0903 23:42:26.759632  171911 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21341-109162/.minikube/proxy-client-ca.key
	I0903 23:42:26.759646  171911 certs.go:256] generating profile certs ...
	I0903 23:42:26.759743  171911 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/old-k8s-version-335468/client.key
	I0903 23:42:26.759820  171911 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/old-k8s-version-335468/apiserver.key.f2828629
	I0903 23:42:26.759878  171911 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/old-k8s-version-335468/proxy-client.key
	I0903 23:42:26.760013  171911 certs.go:484] found cert: /home/jenkins/minikube-integration/21341-109162/.minikube/certs/113288.pem (1338 bytes)
	W0903 23:42:26.760052  171911 certs.go:480] ignoring /home/jenkins/minikube-integration/21341-109162/.minikube/certs/113288_empty.pem, impossibly tiny 0 bytes
	I0903 23:42:26.760066  171911 certs.go:484] found cert: /home/jenkins/minikube-integration/21341-109162/.minikube/certs/ca-key.pem (1675 bytes)
	I0903 23:42:26.760099  171911 certs.go:484] found cert: /home/jenkins/minikube-integration/21341-109162/.minikube/certs/ca.pem (1078 bytes)
	I0903 23:42:26.760133  171911 certs.go:484] found cert: /home/jenkins/minikube-integration/21341-109162/.minikube/certs/cert.pem (1123 bytes)
	I0903 23:42:26.760167  171911 certs.go:484] found cert: /home/jenkins/minikube-integration/21341-109162/.minikube/certs/key.pem (1675 bytes)
	I0903 23:42:26.760220  171911 certs.go:484] found cert: /home/jenkins/minikube-integration/21341-109162/.minikube/files/etc/ssl/certs/1132882.pem (1708 bytes)
	I0903 23:42:26.760811  171911 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0903 23:42:26.791932  171911 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0903 23:42:26.824575  171911 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0903 23:42:26.853358  171911 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0903 23:42:26.887411  171911 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/old-k8s-version-335468/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0903 23:42:26.914421  171911 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/old-k8s-version-335468/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0903 23:42:26.940984  171911 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/old-k8s-version-335468/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0903 23:42:26.968279  171911 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/old-k8s-version-335468/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0903 23:42:26.995059  171911 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/certs/113288.pem --> /usr/share/ca-certificates/113288.pem (1338 bytes)
	I0903 23:42:27.023211  171911 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/files/etc/ssl/certs/1132882.pem --> /usr/share/ca-certificates/1132882.pem (1708 bytes)
	I0903 23:42:27.049929  171911 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0903 23:42:27.076578  171911 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0903 23:42:27.095209  171911 ssh_runner.go:195] Run: openssl version
	I0903 23:42:27.100879  171911 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0903 23:42:27.112933  171911 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0903 23:42:27.118040  171911 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep  3 22:28 /usr/share/ca-certificates/minikubeCA.pem
	I0903 23:42:27.118090  171911 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0903 23:42:27.125341  171911 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0903 23:42:27.140002  171911 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/113288.pem && ln -fs /usr/share/ca-certificates/113288.pem /etc/ssl/certs/113288.pem"
	I0903 23:42:27.154488  171911 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/113288.pem
	I0903 23:42:27.159574  171911 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep  3 22:36 /usr/share/ca-certificates/113288.pem
	I0903 23:42:27.159635  171911 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/113288.pem
	I0903 23:42:27.166580  171911 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/113288.pem /etc/ssl/certs/51391683.0"
	I0903 23:42:27.180666  171911 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1132882.pem && ln -fs /usr/share/ca-certificates/1132882.pem /etc/ssl/certs/1132882.pem"
	I0903 23:42:27.194853  171911 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1132882.pem
	I0903 23:42:27.199793  171911 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep  3 22:36 /usr/share/ca-certificates/1132882.pem
	I0903 23:42:27.199841  171911 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1132882.pem
	I0903 23:42:27.206851  171911 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1132882.pem /etc/ssl/certs/3ec20f2e.0"
	I0903 23:42:27.221163  171911 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0903 23:42:27.226347  171911 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0903 23:42:27.233982  171911 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0903 23:42:27.241290  171911 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0903 23:42:27.248464  171911 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0903 23:42:27.255916  171911 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0903 23:42:27.263308  171911 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
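	Each openssl x509 -checkend 86400 call above asks whether a certificate expires within the next 24 hours (86400 seconds); a non-zero exit would trigger regeneration. A pure-Go equivalent using crypto/x509, with an illustrative certificate path:

```go
// Report whether a PEM-encoded certificate expires within the given window,
// mirroring `openssl x509 -noout -in <path> -checkend 86400`.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", soon)
}
```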
	I0903 23:42:27.270533  171911 kubeadm.go:392] StartCluster: {Name:old-k8s-version-335468 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21147/minikube-v1.36.0-1753487480-21147-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-335468 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.80 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0903 23:42:27.270648  171911 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0903 23:42:27.270739  171911 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0903 23:42:27.306525  171911 cri.go:89] found id: ""
	I0903 23:42:27.306598  171911 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0903 23:42:27.318570  171911 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0903 23:42:27.318592  171911 kubeadm.go:589] restartPrimaryControlPlane start ...
	I0903 23:42:27.318639  171911 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0903 23:42:27.329789  171911 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0903 23:42:27.330196  171911 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-335468" does not appear in /home/jenkins/minikube-integration/21341-109162/kubeconfig
	I0903 23:42:27.330362  171911 kubeconfig.go:62] /home/jenkins/minikube-integration/21341-109162/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-335468" cluster setting kubeconfig missing "old-k8s-version-335468" context setting]
	I0903 23:42:27.330702  171911 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21341-109162/kubeconfig: {Name:mkb8e4377749caffe9aea4452a6b5de9f0dc7427 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0903 23:42:27.374758  171911 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0903 23:42:27.386214  171911 kubeadm.go:626] The running cluster does not require reconfiguration: 192.168.61.80
	I0903 23:42:27.386258  171911 kubeadm.go:1152] stopping kube-system containers ...
	I0903 23:42:27.386272  171911 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0903 23:42:27.386331  171911 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0903 23:42:27.425149  171911 cri.go:89] found id: ""
	I0903 23:42:27.425215  171911 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0903 23:42:27.445596  171911 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0903 23:42:27.456478  171911 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0903 23:42:27.456499  171911 kubeadm.go:157] found existing configuration files:
	
	I0903 23:42:27.456562  171911 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0903 23:42:27.466434  171911 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0903 23:42:27.466490  171911 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0903 23:42:27.477542  171911 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0903 23:42:27.487494  171911 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0903 23:42:27.487556  171911 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0903 23:42:27.498329  171911 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0903 23:42:27.508036  171911 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0903 23:42:27.508096  171911 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0903 23:42:27.521941  171911 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0903 23:42:27.531852  171911 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0903 23:42:27.531907  171911 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
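	The grep/rm cycle above sweeps for stale kubeconfigs: each file under /etc/kubernetes is kept only if it references the expected control-plane endpoint and removed otherwise (here none of the four files exist, so every grep exits non-zero and the rm calls are no-ops). A simplified local sketch of that sweep; file handling is condensed relative to minikube's remote ssh_runner calls:

```go
// Remove any kubeconfig that is missing or does not mention the expected
// control-plane endpoint, mirroring the grep + rm -f pairs in the log.
package main

import (
	"bytes"
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	endpoint := []byte("https://control-plane.minikube.internal:8443")
	for _, f := range []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"} {
		p := filepath.Join("/etc/kubernetes", f)
		data, err := os.ReadFile(p)
		if err != nil || !bytes.Contains(data, endpoint) {
			// Missing or stale: remove with rm -f semantics (ignore errors).
			_ = os.Remove(p)
			fmt.Printf("%s may not reference %s - removed\n", p, endpoint)
			continue
		}
		fmt.Printf("%s up to date\n", p)
	}
}
```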
	I0903 23:42:27.542155  171911 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0903 23:42:27.553239  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0903 23:42:27.633226  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0903 23:42:28.602124  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0903 23:42:28.854495  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0903 23:42:28.947073  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0903 23:42:29.027974  171911 api_server.go:52] waiting for apiserver process to appear ...
	I0903 23:42:29.028070  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:42:29.528786  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:42:30.029080  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:42:30.529093  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:42:31.029115  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:42:31.528486  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:42:32.029181  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:42:32.528450  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:42:33.028477  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:42:33.529071  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:42:34.028981  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:42:34.528195  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:42:35.028453  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:42:35.528706  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:42:36.028199  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:42:36.528759  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:42:37.028416  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:42:37.528169  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:42:38.028416  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:42:38.528882  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:42:39.028560  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:42:39.528880  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:42:40.029029  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:42:40.528664  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:42:41.028784  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:42:41.528383  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:42:42.028492  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:42:42.528853  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:42:43.028647  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:42:43.528940  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:42:44.028219  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:42:44.528661  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:42:45.029081  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:42:45.528521  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:42:46.028610  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:42:46.529168  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:42:47.028585  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:42:47.528452  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:42:48.028847  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:42:48.528533  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:42:49.028538  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:42:49.529012  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:42:50.029175  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:42:50.528266  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:42:51.028443  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:42:51.528936  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:42:52.028174  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:42:52.528782  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:42:53.028946  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:42:53.529016  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:42:54.029217  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:42:54.528827  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:42:55.028743  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:42:55.528564  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:42:56.029013  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:42:56.528850  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:42:57.028379  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:42:57.528543  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:42:58.028863  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:42:58.528547  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:42:59.028618  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:42:59.528316  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:43:00.028825  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:43:00.528728  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:43:01.028929  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:43:01.528618  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:43:02.028774  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:43:02.528830  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:43:03.028902  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:43:03.528997  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:43:04.028460  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:43:04.529085  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:43:05.028814  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:43:05.528240  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:43:06.028382  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:43:06.528648  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:43:07.028776  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:43:07.528630  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:43:08.028650  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:43:08.528498  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:43:09.028874  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:43:09.529055  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:43:10.028335  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:43:10.528817  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:43:11.029166  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:43:11.528517  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:43:12.028284  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:43:12.528580  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:43:13.028324  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:43:13.528516  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:43:14.028872  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:43:14.529100  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:43:15.029032  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:43:15.528427  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:43:16.028297  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:43:16.528182  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:43:17.028871  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:43:17.528931  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:43:18.028363  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:43:18.528960  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:43:19.028522  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:43:19.528560  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:43:20.028879  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:43:20.528155  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:43:21.028536  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:43:21.528372  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:43:22.028985  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:43:22.529094  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:43:23.028627  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:43:23.529025  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:43:24.028457  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:43:24.528968  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:43:25.028323  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:43:25.528323  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:43:26.028859  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:43:26.528886  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:43:27.028648  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:43:27.528292  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:43:28.028496  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:43:28.528556  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
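	The long run of pgrep probes above is a fixed-interval wait loop: minikube polls for a kube-apiserver process roughly every 500ms and, after about a minute without one, falls back to gathering diagnostics (the container listings that follow). A minimal sketch with that cadence; the one-minute budget is inferred from the timestamps, not taken from minikube's source:

```go
// Poll for the apiserver process at a fixed interval until a deadline,
// using the same probe the log runs over ssh.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func waitForAPIServer(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		// pgrep exits non-zero when no matching process exists.
		if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver process never appeared within %s", timeout)
}

func main() {
	if err := waitForAPIServer(time.Minute); err != nil {
		fmt.Println(err) // at this point the log switches to gathering diagnostics
	}
}
```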
	I0903 23:43:29.028482  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:43:29.028567  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:43:29.065203  171911 cri.go:89] found id: ""
	I0903 23:43:29.065238  171911 logs.go:282] 0 containers: []
	W0903 23:43:29.065249  171911 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:43:29.065257  171911 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:43:29.065323  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:43:29.099969  171911 cri.go:89] found id: ""
	I0903 23:43:29.100008  171911 logs.go:282] 0 containers: []
	W0903 23:43:29.100020  171911 logs.go:284] No container was found matching "etcd"
	I0903 23:43:29.100030  171911 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:43:29.100100  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:43:29.134038  171911 cri.go:89] found id: ""
	I0903 23:43:29.134075  171911 logs.go:282] 0 containers: []
	W0903 23:43:29.134088  171911 logs.go:284] No container was found matching "coredns"
	I0903 23:43:29.134096  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:43:29.134166  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:43:29.167976  171911 cri.go:89] found id: ""
	I0903 23:43:29.168009  171911 logs.go:282] 0 containers: []
	W0903 23:43:29.168018  171911 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:43:29.168025  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:43:29.168081  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:43:29.203375  171911 cri.go:89] found id: ""
	I0903 23:43:29.203406  171911 logs.go:282] 0 containers: []
	W0903 23:43:29.203414  171911 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:43:29.203420  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:43:29.203487  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:43:29.237316  171911 cri.go:89] found id: ""
	I0903 23:43:29.237347  171911 logs.go:282] 0 containers: []
	W0903 23:43:29.237358  171911 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:43:29.237366  171911 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:43:29.237456  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:43:29.271010  171911 cri.go:89] found id: ""
	I0903 23:43:29.271036  171911 logs.go:282] 0 containers: []
	W0903 23:43:29.271044  171911 logs.go:284] No container was found matching "kindnet"
	I0903 23:43:29.271051  171911 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0903 23:43:29.271115  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0903 23:43:29.305355  171911 cri.go:89] found id: ""
	I0903 23:43:29.305398  171911 logs.go:282] 0 containers: []
	W0903 23:43:29.305410  171911 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0903 23:43:29.305424  171911 logs.go:123] Gathering logs for container status ...
	I0903 23:43:29.305450  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0903 23:43:29.343610  171911 logs.go:123] Gathering logs for kubelet ...
	I0903 23:43:29.343647  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0903 23:43:29.390474  171911 logs.go:123] Gathering logs for dmesg ...
	I0903 23:43:29.390513  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0903 23:43:29.404227  171911 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:43:29.404255  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:43:29.473354  171911 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0903 23:43:29.473377  171911 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:43:29.473409  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0903 23:43:32.045578  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:43:32.064442  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:43:32.064510  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:43:32.104125  171911 cri.go:89] found id: ""
	I0903 23:43:32.104153  171911 logs.go:282] 0 containers: []
	W0903 23:43:32.104162  171911 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:43:32.104167  171911 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:43:32.104219  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:43:32.140304  171911 cri.go:89] found id: ""
	I0903 23:43:32.140344  171911 logs.go:282] 0 containers: []
	W0903 23:43:32.140357  171911 logs.go:284] No container was found matching "etcd"
	I0903 23:43:32.140366  171911 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:43:32.140436  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:43:32.174194  171911 cri.go:89] found id: ""
	I0903 23:43:32.174227  171911 logs.go:282] 0 containers: []
	W0903 23:43:32.174241  171911 logs.go:284] No container was found matching "coredns"
	I0903 23:43:32.174249  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:43:32.174322  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:43:32.207732  171911 cri.go:89] found id: ""
	I0903 23:43:32.207760  171911 logs.go:282] 0 containers: []
	W0903 23:43:32.207768  171911 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:43:32.207775  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:43:32.207828  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:43:32.242885  171911 cri.go:89] found id: ""
	I0903 23:43:32.242919  171911 logs.go:282] 0 containers: []
	W0903 23:43:32.242927  171911 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:43:32.242934  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:43:32.242991  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:43:32.276911  171911 cri.go:89] found id: ""
	I0903 23:43:32.276938  171911 logs.go:282] 0 containers: []
	W0903 23:43:32.276945  171911 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:43:32.276952  171911 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:43:32.277004  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:43:32.310660  171911 cri.go:89] found id: ""
	I0903 23:43:32.310689  171911 logs.go:282] 0 containers: []
	W0903 23:43:32.310697  171911 logs.go:284] No container was found matching "kindnet"
	I0903 23:43:32.310703  171911 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0903 23:43:32.310753  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0903 23:43:32.344285  171911 cri.go:89] found id: ""
	I0903 23:43:32.344316  171911 logs.go:282] 0 containers: []
	W0903 23:43:32.344327  171911 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0903 23:43:32.344341  171911 logs.go:123] Gathering logs for kubelet ...
	I0903 23:43:32.344357  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0903 23:43:32.394031  171911 logs.go:123] Gathering logs for dmesg ...
	I0903 23:43:32.394079  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0903 23:43:32.408165  171911 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:43:32.408199  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:43:32.473250  171911 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0903 23:43:32.473279  171911 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:43:32.473293  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0903 23:43:32.556677  171911 logs.go:123] Gathering logs for container status ...
	I0903 23:43:32.556722  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0903 23:43:35.104790  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:43:35.121004  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:43:35.121069  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:43:35.153087  171911 cri.go:89] found id: ""
	I0903 23:43:35.153118  171911 logs.go:282] 0 containers: []
	W0903 23:43:35.153126  171911 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:43:35.153133  171911 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:43:35.153187  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:43:35.185837  171911 cri.go:89] found id: ""
	I0903 23:43:35.185877  171911 logs.go:282] 0 containers: []
	W0903 23:43:35.185885  171911 logs.go:284] No container was found matching "etcd"
	I0903 23:43:35.185891  171911 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:43:35.185947  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:43:35.219367  171911 cri.go:89] found id: ""
	I0903 23:43:35.219410  171911 logs.go:282] 0 containers: []
	W0903 23:43:35.219421  171911 logs.go:284] No container was found matching "coredns"
	I0903 23:43:35.219430  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:43:35.219491  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:43:35.253170  171911 cri.go:89] found id: ""
	I0903 23:43:35.253204  171911 logs.go:282] 0 containers: []
	W0903 23:43:35.253218  171911 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:43:35.253239  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:43:35.253325  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:43:35.285565  171911 cri.go:89] found id: ""
	I0903 23:43:35.285599  171911 logs.go:282] 0 containers: []
	W0903 23:43:35.285611  171911 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:43:35.285620  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:43:35.285688  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:43:35.319446  171911 cri.go:89] found id: ""
	I0903 23:43:35.319476  171911 logs.go:282] 0 containers: []
	W0903 23:43:35.319484  171911 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:43:35.319490  171911 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:43:35.319541  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:43:35.354359  171911 cri.go:89] found id: ""
	I0903 23:43:35.354387  171911 logs.go:282] 0 containers: []
	W0903 23:43:35.354394  171911 logs.go:284] No container was found matching "kindnet"
	I0903 23:43:35.354400  171911 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0903 23:43:35.354452  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0903 23:43:35.390780  171911 cri.go:89] found id: ""
	I0903 23:43:35.390815  171911 logs.go:282] 0 containers: []
	W0903 23:43:35.390825  171911 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0903 23:43:35.390837  171911 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:43:35.390852  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0903 23:43:35.465751  171911 logs.go:123] Gathering logs for container status ...
	I0903 23:43:35.465790  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0903 23:43:35.504480  171911 logs.go:123] Gathering logs for kubelet ...
	I0903 23:43:35.504517  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0903 23:43:35.554283  171911 logs.go:123] Gathering logs for dmesg ...
	I0903 23:43:35.554318  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0903 23:43:35.567404  171911 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:43:35.567436  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:43:35.629663  171911 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0903 23:43:38.130296  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:43:38.146915  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:43:38.147003  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:43:38.179729  171911 cri.go:89] found id: ""
	I0903 23:43:38.179768  171911 logs.go:282] 0 containers: []
	W0903 23:43:38.179781  171911 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:43:38.179791  171911 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:43:38.179863  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:43:38.212185  171911 cri.go:89] found id: ""
	I0903 23:43:38.212215  171911 logs.go:282] 0 containers: []
	W0903 23:43:38.212227  171911 logs.go:284] No container was found matching "etcd"
	I0903 23:43:38.212235  171911 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:43:38.212322  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:43:38.245927  171911 cri.go:89] found id: ""
	I0903 23:43:38.245953  171911 logs.go:282] 0 containers: []
	W0903 23:43:38.245960  171911 logs.go:284] No container was found matching "coredns"
	I0903 23:43:38.245966  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:43:38.246027  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:43:38.280868  171911 cri.go:89] found id: ""
	I0903 23:43:38.280900  171911 logs.go:282] 0 containers: []
	W0903 23:43:38.280911  171911 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:43:38.280918  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:43:38.281003  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:43:38.321240  171911 cri.go:89] found id: ""
	I0903 23:43:38.321275  171911 logs.go:282] 0 containers: []
	W0903 23:43:38.321288  171911 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:43:38.321298  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:43:38.321407  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:43:38.375140  171911 cri.go:89] found id: ""
	I0903 23:43:38.375169  171911 logs.go:282] 0 containers: []
	W0903 23:43:38.375183  171911 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:43:38.375191  171911 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:43:38.375277  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:43:38.418890  171911 cri.go:89] found id: ""
	I0903 23:43:38.418928  171911 logs.go:282] 0 containers: []
	W0903 23:43:38.418940  171911 logs.go:284] No container was found matching "kindnet"
	I0903 23:43:38.418950  171911 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0903 23:43:38.419019  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0903 23:43:38.452908  171911 cri.go:89] found id: ""
	I0903 23:43:38.452938  171911 logs.go:282] 0 containers: []
	W0903 23:43:38.452949  171911 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0903 23:43:38.452962  171911 logs.go:123] Gathering logs for kubelet ...
	I0903 23:43:38.452978  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0903 23:43:38.503416  171911 logs.go:123] Gathering logs for dmesg ...
	I0903 23:43:38.503460  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0903 23:43:38.517203  171911 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:43:38.517233  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:43:38.580070  171911 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0903 23:43:38.580096  171911 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:43:38.580110  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0903 23:43:38.652380  171911 logs.go:123] Gathering logs for container status ...
	I0903 23:43:38.652420  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0903 23:43:41.192031  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:43:41.208483  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:43:41.208546  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:43:41.241854  171911 cri.go:89] found id: ""
	I0903 23:43:41.241880  171911 logs.go:282] 0 containers: []
	W0903 23:43:41.241887  171911 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:43:41.241895  171911 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:43:41.241953  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:43:41.276043  171911 cri.go:89] found id: ""
	I0903 23:43:41.276070  171911 logs.go:282] 0 containers: []
	W0903 23:43:41.276078  171911 logs.go:284] No container was found matching "etcd"
	I0903 23:43:41.276084  171911 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:43:41.276136  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:43:41.312473  171911 cri.go:89] found id: ""
	I0903 23:43:41.312503  171911 logs.go:282] 0 containers: []
	W0903 23:43:41.312514  171911 logs.go:284] No container was found matching "coredns"
	I0903 23:43:41.312522  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:43:41.312591  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:43:41.345515  171911 cri.go:89] found id: ""
	I0903 23:43:41.345543  171911 logs.go:282] 0 containers: []
	W0903 23:43:41.345551  171911 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:43:41.345558  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:43:41.345630  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:43:41.378505  171911 cri.go:89] found id: ""
	I0903 23:43:41.378539  171911 logs.go:282] 0 containers: []
	W0903 23:43:41.378547  171911 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:43:41.378554  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:43:41.378613  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:43:41.414245  171911 cri.go:89] found id: ""
	I0903 23:43:41.414276  171911 logs.go:282] 0 containers: []
	W0903 23:43:41.414284  171911 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:43:41.414290  171911 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:43:41.414351  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:43:41.450931  171911 cri.go:89] found id: ""
	I0903 23:43:41.450969  171911 logs.go:282] 0 containers: []
	W0903 23:43:41.450981  171911 logs.go:284] No container was found matching "kindnet"
	I0903 23:43:41.451050  171911 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0903 23:43:41.451126  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0903 23:43:41.484869  171911 cri.go:89] found id: ""
	I0903 23:43:41.484898  171911 logs.go:282] 0 containers: []
	W0903 23:43:41.484906  171911 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0903 23:43:41.484916  171911 logs.go:123] Gathering logs for dmesg ...
	I0903 23:43:41.484934  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0903 23:43:41.498189  171911 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:43:41.498219  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:43:41.560558  171911 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0903 23:43:41.560583  171911 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:43:41.560601  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0903 23:43:41.637195  171911 logs.go:123] Gathering logs for container status ...
	I0903 23:43:41.637235  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0903 23:43:41.675448  171911 logs.go:123] Gathering logs for kubelet ...
	I0903 23:43:41.675478  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0903 23:43:44.223401  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:43:44.253341  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:43:44.253423  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:43:44.300478  171911 cri.go:89] found id: ""
	I0903 23:43:44.300512  171911 logs.go:282] 0 containers: []
	W0903 23:43:44.300523  171911 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:43:44.300531  171911 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:43:44.300625  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:43:44.342127  171911 cri.go:89] found id: ""
	I0903 23:43:44.342158  171911 logs.go:282] 0 containers: []
	W0903 23:43:44.342166  171911 logs.go:284] No container was found matching "etcd"
	I0903 23:43:44.342178  171911 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:43:44.342242  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:43:44.392479  171911 cri.go:89] found id: ""
	I0903 23:43:44.392505  171911 logs.go:282] 0 containers: []
	W0903 23:43:44.392514  171911 logs.go:284] No container was found matching "coredns"
	I0903 23:43:44.392522  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:43:44.392587  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:43:44.428584  171911 cri.go:89] found id: ""
	I0903 23:43:44.428627  171911 logs.go:282] 0 containers: []
	W0903 23:43:44.428646  171911 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:43:44.428655  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:43:44.428724  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:43:44.463165  171911 cri.go:89] found id: ""
	I0903 23:43:44.463196  171911 logs.go:282] 0 containers: []
	W0903 23:43:44.463205  171911 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:43:44.463214  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:43:44.463276  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:43:44.497562  171911 cri.go:89] found id: ""
	I0903 23:43:44.497599  171911 logs.go:282] 0 containers: []
	W0903 23:43:44.497606  171911 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:43:44.497616  171911 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:43:44.497671  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:43:44.532319  171911 cri.go:89] found id: ""
	I0903 23:43:44.532349  171911 logs.go:282] 0 containers: []
	W0903 23:43:44.532356  171911 logs.go:284] No container was found matching "kindnet"
	I0903 23:43:44.532371  171911 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0903 23:43:44.532431  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0903 23:43:44.567181  171911 cri.go:89] found id: ""
	I0903 23:43:44.567214  171911 logs.go:282] 0 containers: []
	W0903 23:43:44.567229  171911 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0903 23:43:44.567242  171911 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:43:44.567259  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0903 23:43:44.647186  171911 logs.go:123] Gathering logs for container status ...
	I0903 23:43:44.647237  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0903 23:43:44.684779  171911 logs.go:123] Gathering logs for kubelet ...
	I0903 23:43:44.684815  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0903 23:43:44.734346  171911 logs.go:123] Gathering logs for dmesg ...
	I0903 23:43:44.734384  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0903 23:43:44.748304  171911 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:43:44.748333  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:43:44.811995  171911 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0903 23:43:47.313737  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:43:47.330976  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:43:47.331047  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:43:47.365152  171911 cri.go:89] found id: ""
	I0903 23:43:47.365183  171911 logs.go:282] 0 containers: []
	W0903 23:43:47.365191  171911 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:43:47.365198  171911 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:43:47.365250  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:43:47.402002  171911 cri.go:89] found id: ""
	I0903 23:43:47.402034  171911 logs.go:282] 0 containers: []
	W0903 23:43:47.402042  171911 logs.go:284] No container was found matching "etcd"
	I0903 23:43:47.402048  171911 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:43:47.402103  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:43:47.439574  171911 cri.go:89] found id: ""
	I0903 23:43:47.439611  171911 logs.go:282] 0 containers: []
	W0903 23:43:47.439619  171911 logs.go:284] No container was found matching "coredns"
	I0903 23:43:47.439626  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:43:47.439694  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:43:47.474877  171911 cri.go:89] found id: ""
	I0903 23:43:47.474910  171911 logs.go:282] 0 containers: []
	W0903 23:43:47.474918  171911 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:43:47.474925  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:43:47.474980  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:43:47.511850  171911 cri.go:89] found id: ""
	I0903 23:43:47.511882  171911 logs.go:282] 0 containers: []
	W0903 23:43:47.511889  171911 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:43:47.511896  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:43:47.511952  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:43:47.545975  171911 cri.go:89] found id: ""
	I0903 23:43:47.546011  171911 logs.go:282] 0 containers: []
	W0903 23:43:47.546022  171911 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:43:47.546032  171911 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:43:47.546091  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:43:47.581967  171911 cri.go:89] found id: ""
	I0903 23:43:47.581996  171911 logs.go:282] 0 containers: []
	W0903 23:43:47.582004  171911 logs.go:284] No container was found matching "kindnet"
	I0903 23:43:47.582010  171911 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0903 23:43:47.582079  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0903 23:43:47.617442  171911 cri.go:89] found id: ""
	I0903 23:43:47.617470  171911 logs.go:282] 0 containers: []
	W0903 23:43:47.617478  171911 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0903 23:43:47.617487  171911 logs.go:123] Gathering logs for container status ...
	I0903 23:43:47.617499  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0903 23:43:47.655119  171911 logs.go:123] Gathering logs for kubelet ...
	I0903 23:43:47.655150  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0903 23:43:47.702001  171911 logs.go:123] Gathering logs for dmesg ...
	I0903 23:43:47.702035  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0903 23:43:47.715671  171911 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:43:47.715701  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:43:47.781271  171911 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0903 23:43:47.781297  171911 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:43:47.781310  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0903 23:43:50.353562  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:43:50.370200  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:43:50.370271  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:43:50.404593  171911 cri.go:89] found id: ""
	I0903 23:43:50.404621  171911 logs.go:282] 0 containers: []
	W0903 23:43:50.404631  171911 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:43:50.404640  171911 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:43:50.404714  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:43:50.438454  171911 cri.go:89] found id: ""
	I0903 23:43:50.438482  171911 logs.go:282] 0 containers: []
	W0903 23:43:50.438491  171911 logs.go:284] No container was found matching "etcd"
	I0903 23:43:50.438498  171911 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:43:50.438609  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:43:50.474138  171911 cri.go:89] found id: ""
	I0903 23:43:50.474165  171911 logs.go:282] 0 containers: []
	W0903 23:43:50.474176  171911 logs.go:284] No container was found matching "coredns"
	I0903 23:43:50.474184  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:43:50.474247  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:43:50.506277  171911 cri.go:89] found id: ""
	I0903 23:43:50.506308  171911 logs.go:282] 0 containers: []
	W0903 23:43:50.506319  171911 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:43:50.506328  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:43:50.506398  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:43:50.540877  171911 cri.go:89] found id: ""
	I0903 23:43:50.540905  171911 logs.go:282] 0 containers: []
	W0903 23:43:50.540912  171911 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:43:50.540918  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:43:50.540969  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:43:50.574490  171911 cri.go:89] found id: ""
	I0903 23:43:50.574548  171911 logs.go:282] 0 containers: []
	W0903 23:43:50.574567  171911 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:43:50.574578  171911 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:43:50.574654  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:43:50.608197  171911 cri.go:89] found id: ""
	I0903 23:43:50.608225  171911 logs.go:282] 0 containers: []
	W0903 23:43:50.608233  171911 logs.go:284] No container was found matching "kindnet"
	I0903 23:43:50.608238  171911 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0903 23:43:50.608288  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0903 23:43:50.641053  171911 cri.go:89] found id: ""
	I0903 23:43:50.641082  171911 logs.go:282] 0 containers: []
	W0903 23:43:50.641089  171911 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0903 23:43:50.641099  171911 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:43:50.641109  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0903 23:43:50.712696  171911 logs.go:123] Gathering logs for container status ...
	I0903 23:43:50.712742  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0903 23:43:50.749969  171911 logs.go:123] Gathering logs for kubelet ...
	I0903 23:43:50.750001  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0903 23:43:50.800039  171911 logs.go:123] Gathering logs for dmesg ...
	I0903 23:43:50.800074  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0903 23:43:50.813705  171911 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:43:50.813736  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:43:50.876873  171911 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0903 23:43:53.378585  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:43:53.395927  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:43:53.395997  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:43:53.429784  171911 cri.go:89] found id: ""
	I0903 23:43:53.429814  171911 logs.go:282] 0 containers: []
	W0903 23:43:53.429821  171911 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:43:53.429827  171911 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:43:53.429880  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:43:53.463718  171911 cri.go:89] found id: ""
	I0903 23:43:53.463745  171911 logs.go:282] 0 containers: []
	W0903 23:43:53.463753  171911 logs.go:284] No container was found matching "etcd"
	I0903 23:43:53.463759  171911 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:43:53.463815  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:43:53.499017  171911 cri.go:89] found id: ""
	I0903 23:43:53.499046  171911 logs.go:282] 0 containers: []
	W0903 23:43:53.499056  171911 logs.go:284] No container was found matching "coredns"
	I0903 23:43:53.499065  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:43:53.499132  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:43:53.534239  171911 cri.go:89] found id: ""
	I0903 23:43:53.534273  171911 logs.go:282] 0 containers: []
	W0903 23:43:53.534283  171911 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:43:53.534290  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:43:53.534353  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:43:53.567405  171911 cri.go:89] found id: ""
	I0903 23:43:53.567431  171911 logs.go:282] 0 containers: []
	W0903 23:43:53.567438  171911 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:43:53.567445  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:43:53.567500  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:43:53.603686  171911 cri.go:89] found id: ""
	I0903 23:43:53.603722  171911 logs.go:282] 0 containers: []
	W0903 23:43:53.603733  171911 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:43:53.603742  171911 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:43:53.603805  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:43:53.638591  171911 cri.go:89] found id: ""
	I0903 23:43:53.638618  171911 logs.go:282] 0 containers: []
	W0903 23:43:53.638627  171911 logs.go:284] No container was found matching "kindnet"
	I0903 23:43:53.638635  171911 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0903 23:43:53.638698  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0903 23:43:53.672243  171911 cri.go:89] found id: ""
	I0903 23:43:53.672288  171911 logs.go:282] 0 containers: []
	W0903 23:43:53.672296  171911 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0903 23:43:53.672305  171911 logs.go:123] Gathering logs for kubelet ...
	I0903 23:43:53.672318  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0903 23:43:53.721410  171911 logs.go:123] Gathering logs for dmesg ...
	I0903 23:43:53.721448  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0903 23:43:53.735356  171911 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:43:53.735386  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:43:53.797966  171911 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0903 23:43:53.797988  171911 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:43:53.798005  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0903 23:43:53.872491  171911 logs.go:123] Gathering logs for container status ...
	I0903 23:43:53.872529  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0903 23:43:56.410853  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:43:56.427796  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:43:56.427871  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:43:56.460023  171911 cri.go:89] found id: ""
	I0903 23:43:56.460066  171911 logs.go:282] 0 containers: []
	W0903 23:43:56.460077  171911 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:43:56.460085  171911 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:43:56.460160  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:43:56.494386  171911 cri.go:89] found id: ""
	I0903 23:43:56.494414  171911 logs.go:282] 0 containers: []
	W0903 23:43:56.494424  171911 logs.go:284] No container was found matching "etcd"
	I0903 23:43:56.494432  171911 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:43:56.494492  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:43:56.529298  171911 cri.go:89] found id: ""
	I0903 23:43:56.529329  171911 logs.go:282] 0 containers: []
	W0903 23:43:56.529339  171911 logs.go:284] No container was found matching "coredns"
	I0903 23:43:56.529356  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:43:56.529433  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:43:56.562775  171911 cri.go:89] found id: ""
	I0903 23:43:56.562818  171911 logs.go:282] 0 containers: []
	W0903 23:43:56.562830  171911 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:43:56.562837  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:43:56.562898  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:43:56.604698  171911 cri.go:89] found id: ""
	I0903 23:43:56.604739  171911 logs.go:282] 0 containers: []
	W0903 23:43:56.604751  171911 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:43:56.604758  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:43:56.604811  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:43:56.644278  171911 cri.go:89] found id: ""
	I0903 23:43:56.644307  171911 logs.go:282] 0 containers: []
	W0903 23:43:56.644319  171911 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:43:56.644328  171911 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:43:56.644397  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:43:56.686334  171911 cri.go:89] found id: ""
	I0903 23:43:56.686366  171911 logs.go:282] 0 containers: []
	W0903 23:43:56.686377  171911 logs.go:284] No container was found matching "kindnet"
	I0903 23:43:56.686385  171911 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0903 23:43:56.686458  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0903 23:43:56.725441  171911 cri.go:89] found id: ""
	I0903 23:43:56.725466  171911 logs.go:282] 0 containers: []
	W0903 23:43:56.725486  171911 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0903 23:43:56.725494  171911 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:43:56.725508  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:43:56.791969  171911 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0903 23:43:56.792002  171911 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:43:56.792021  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0903 23:43:56.866297  171911 logs.go:123] Gathering logs for container status ...
	I0903 23:43:56.866338  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0903 23:43:56.904335  171911 logs.go:123] Gathering logs for kubelet ...
	I0903 23:43:56.904372  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0903 23:43:56.952822  171911 logs.go:123] Gathering logs for dmesg ...
	I0903 23:43:56.952863  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0903 23:43:59.466793  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:43:59.484556  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:43:59.484633  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:43:59.521818  171911 cri.go:89] found id: ""
	I0903 23:43:59.521848  171911 logs.go:282] 0 containers: []
	W0903 23:43:59.521860  171911 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:43:59.521868  171911 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:43:59.521945  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:43:59.556474  171911 cri.go:89] found id: ""
	I0903 23:43:59.556501  171911 logs.go:282] 0 containers: []
	W0903 23:43:59.556509  171911 logs.go:284] No container was found matching "etcd"
	I0903 23:43:59.556515  171911 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:43:59.556569  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:43:59.591410  171911 cri.go:89] found id: ""
	I0903 23:43:59.591440  171911 logs.go:282] 0 containers: []
	W0903 23:43:59.591447  171911 logs.go:284] No container was found matching "coredns"
	I0903 23:43:59.591453  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:43:59.591503  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:43:59.625559  171911 cri.go:89] found id: ""
	I0903 23:43:59.625587  171911 logs.go:282] 0 containers: []
	W0903 23:43:59.625593  171911 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:43:59.625615  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:43:59.625668  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:43:59.659603  171911 cri.go:89] found id: ""
	I0903 23:43:59.659635  171911 logs.go:282] 0 containers: []
	W0903 23:43:59.659643  171911 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:43:59.659655  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:43:59.659713  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:43:59.700514  171911 cri.go:89] found id: ""
	I0903 23:43:59.700553  171911 logs.go:282] 0 containers: []
	W0903 23:43:59.700566  171911 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:43:59.700576  171911 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:43:59.700669  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:43:59.734778  171911 cri.go:89] found id: ""
	I0903 23:43:59.734805  171911 logs.go:282] 0 containers: []
	W0903 23:43:59.734816  171911 logs.go:284] No container was found matching "kindnet"
	I0903 23:43:59.734824  171911 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0903 23:43:59.734884  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0903 23:43:59.769663  171911 cri.go:89] found id: ""
	I0903 23:43:59.769703  171911 logs.go:282] 0 containers: []
	W0903 23:43:59.769714  171911 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0903 23:43:59.769727  171911 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:43:59.769743  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:43:59.832033  171911 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0903 23:43:59.832056  171911 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:43:59.832075  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0903 23:43:59.905304  171911 logs.go:123] Gathering logs for container status ...
	I0903 23:43:59.905348  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0903 23:43:59.942790  171911 logs.go:123] Gathering logs for kubelet ...
	I0903 23:43:59.942823  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0903 23:43:59.992617  171911 logs.go:123] Gathering logs for dmesg ...
	I0903 23:43:59.992660  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0903 23:44:02.508378  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:44:02.525572  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:44:02.525652  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:44:02.561330  171911 cri.go:89] found id: ""
	I0903 23:44:02.561361  171911 logs.go:282] 0 containers: []
	W0903 23:44:02.561369  171911 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:44:02.561375  171911 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:44:02.561461  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:44:02.595933  171911 cri.go:89] found id: ""
	I0903 23:44:02.595962  171911 logs.go:282] 0 containers: []
	W0903 23:44:02.595970  171911 logs.go:284] No container was found matching "etcd"
	I0903 23:44:02.595975  171911 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:44:02.596041  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:44:02.628817  171911 cri.go:89] found id: ""
	I0903 23:44:02.628854  171911 logs.go:282] 0 containers: []
	W0903 23:44:02.628865  171911 logs.go:284] No container was found matching "coredns"
	I0903 23:44:02.628873  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:44:02.628944  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:44:02.665027  171911 cri.go:89] found id: ""
	I0903 23:44:02.665060  171911 logs.go:282] 0 containers: []
	W0903 23:44:02.665072  171911 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:44:02.665079  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:44:02.665143  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:44:02.698721  171911 cri.go:89] found id: ""
	I0903 23:44:02.698752  171911 logs.go:282] 0 containers: []
	W0903 23:44:02.698761  171911 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:44:02.698768  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:44:02.698822  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:44:02.736138  171911 cri.go:89] found id: ""
	I0903 23:44:02.736170  171911 logs.go:282] 0 containers: []
	W0903 23:44:02.736180  171911 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:44:02.736188  171911 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:44:02.736254  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:44:02.770089  171911 cri.go:89] found id: ""
	I0903 23:44:02.770120  171911 logs.go:282] 0 containers: []
	W0903 23:44:02.770127  171911 logs.go:284] No container was found matching "kindnet"
	I0903 23:44:02.770134  171911 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0903 23:44:02.770201  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0903 23:44:02.805595  171911 cri.go:89] found id: ""
	I0903 23:44:02.805627  171911 logs.go:282] 0 containers: []
	W0903 23:44:02.805638  171911 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0903 23:44:02.805650  171911 logs.go:123] Gathering logs for kubelet ...
	I0903 23:44:02.805666  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0903 23:44:02.855714  171911 logs.go:123] Gathering logs for dmesg ...
	I0903 23:44:02.855753  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0903 23:44:02.870817  171911 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:44:02.870854  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:44:02.935987  171911 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0903 23:44:02.936011  171911 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:44:02.936025  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0903 23:44:03.013471  171911 logs.go:123] Gathering logs for container status ...
	I0903 23:44:03.013513  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0903 23:44:05.553522  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:44:05.570805  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:44:05.570869  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:44:05.606023  171911 cri.go:89] found id: ""
	I0903 23:44:05.606061  171911 logs.go:282] 0 containers: []
	W0903 23:44:05.606075  171911 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:44:05.606084  171911 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:44:05.606151  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:44:05.640331  171911 cri.go:89] found id: ""
	I0903 23:44:05.640362  171911 logs.go:282] 0 containers: []
	W0903 23:44:05.640374  171911 logs.go:284] No container was found matching "etcd"
	I0903 23:44:05.640380  171911 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:44:05.640455  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:44:05.675579  171911 cri.go:89] found id: ""
	I0903 23:44:05.675613  171911 logs.go:282] 0 containers: []
	W0903 23:44:05.675626  171911 logs.go:284] No container was found matching "coredns"
	I0903 23:44:05.675634  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:44:05.675698  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:44:05.710190  171911 cri.go:89] found id: ""
	I0903 23:44:05.710219  171911 logs.go:282] 0 containers: []
	W0903 23:44:05.710226  171911 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:44:05.710233  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:44:05.710292  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:44:05.745803  171911 cri.go:89] found id: ""
	I0903 23:44:05.745834  171911 logs.go:282] 0 containers: []
	W0903 23:44:05.745843  171911 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:44:05.745850  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:44:05.745908  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:44:05.780095  171911 cri.go:89] found id: ""
	I0903 23:44:05.780126  171911 logs.go:282] 0 containers: []
	W0903 23:44:05.780134  171911 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:44:05.780141  171911 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:44:05.780193  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:44:05.812816  171911 cri.go:89] found id: ""
	I0903 23:44:05.812849  171911 logs.go:282] 0 containers: []
	W0903 23:44:05.812862  171911 logs.go:284] No container was found matching "kindnet"
	I0903 23:44:05.812870  171911 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0903 23:44:05.812944  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0903 23:44:05.845992  171911 cri.go:89] found id: ""
	I0903 23:44:05.846024  171911 logs.go:282] 0 containers: []
	W0903 23:44:05.846032  171911 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0903 23:44:05.846041  171911 logs.go:123] Gathering logs for kubelet ...
	I0903 23:44:05.846053  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0903 23:44:05.896122  171911 logs.go:123] Gathering logs for dmesg ...
	I0903 23:44:05.896163  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0903 23:44:05.910777  171911 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:44:05.910815  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:44:05.973743  171911 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0903 23:44:05.973771  171911 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:44:05.973784  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0903 23:44:06.047880  171911 logs.go:123] Gathering logs for container status ...
	I0903 23:44:06.047924  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0903 23:44:08.588751  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:44:08.605926  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:44:08.605989  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:44:08.639229  171911 cri.go:89] found id: ""
	I0903 23:44:08.639260  171911 logs.go:282] 0 containers: []
	W0903 23:44:08.639268  171911 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:44:08.639275  171911 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:44:08.639332  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:44:08.673218  171911 cri.go:89] found id: ""
	I0903 23:44:08.673263  171911 logs.go:282] 0 containers: []
	W0903 23:44:08.673274  171911 logs.go:284] No container was found matching "etcd"
	I0903 23:44:08.673283  171911 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:44:08.673353  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:44:08.708635  171911 cri.go:89] found id: ""
	I0903 23:44:08.708665  171911 logs.go:282] 0 containers: []
	W0903 23:44:08.708676  171911 logs.go:284] No container was found matching "coredns"
	I0903 23:44:08.708685  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:44:08.708755  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:44:08.744277  171911 cri.go:89] found id: ""
	I0903 23:44:08.744304  171911 logs.go:282] 0 containers: []
	W0903 23:44:08.744311  171911 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:44:08.744318  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:44:08.744385  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:44:08.778421  171911 cri.go:89] found id: ""
	I0903 23:44:08.778451  171911 logs.go:282] 0 containers: []
	W0903 23:44:08.778469  171911 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:44:08.778477  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:44:08.778541  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:44:08.815240  171911 cri.go:89] found id: ""
	I0903 23:44:08.815277  171911 logs.go:282] 0 containers: []
	W0903 23:44:08.815290  171911 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:44:08.815298  171911 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:44:08.815371  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:44:08.849900  171911 cri.go:89] found id: ""
	I0903 23:44:08.849929  171911 logs.go:282] 0 containers: []
	W0903 23:44:08.849936  171911 logs.go:284] No container was found matching "kindnet"
	I0903 23:44:08.849942  171911 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0903 23:44:08.849993  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0903 23:44:08.885596  171911 cri.go:89] found id: ""
	I0903 23:44:08.885631  171911 logs.go:282] 0 containers: []
	W0903 23:44:08.885641  171911 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0903 23:44:08.885651  171911 logs.go:123] Gathering logs for container status ...
	I0903 23:44:08.885668  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0903 23:44:08.924882  171911 logs.go:123] Gathering logs for kubelet ...
	I0903 23:44:08.924909  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0903 23:44:08.976269  171911 logs.go:123] Gathering logs for dmesg ...
	I0903 23:44:08.976304  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0903 23:44:08.993447  171911 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:44:08.993483  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:44:09.069817  171911 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0903 23:44:09.069845  171911 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:44:09.069862  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0903 23:44:11.651779  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:44:11.668352  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:44:11.668423  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:44:11.703206  171911 cri.go:89] found id: ""
	I0903 23:44:11.703243  171911 logs.go:282] 0 containers: []
	W0903 23:44:11.703255  171911 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:44:11.703264  171911 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:44:11.703357  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:44:11.737323  171911 cri.go:89] found id: ""
	I0903 23:44:11.737367  171911 logs.go:282] 0 containers: []
	W0903 23:44:11.737380  171911 logs.go:284] No container was found matching "etcd"
	I0903 23:44:11.737402  171911 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:44:11.737479  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:44:11.771970  171911 cri.go:89] found id: ""
	I0903 23:44:11.772010  171911 logs.go:282] 0 containers: []
	W0903 23:44:11.772021  171911 logs.go:284] No container was found matching "coredns"
	I0903 23:44:11.772030  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:44:11.772104  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:44:11.806342  171911 cri.go:89] found id: ""
	I0903 23:44:11.806386  171911 logs.go:282] 0 containers: []
	W0903 23:44:11.806397  171911 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:44:11.806406  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:44:11.806483  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:44:11.843136  171911 cri.go:89] found id: ""
	I0903 23:44:11.843170  171911 logs.go:282] 0 containers: []
	W0903 23:44:11.843181  171911 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:44:11.843189  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:44:11.843259  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:44:11.877246  171911 cri.go:89] found id: ""
	I0903 23:44:11.877285  171911 logs.go:282] 0 containers: []
	W0903 23:44:11.877296  171911 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:44:11.877306  171911 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:44:11.877379  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:44:11.915257  171911 cri.go:89] found id: ""
	I0903 23:44:11.915295  171911 logs.go:282] 0 containers: []
	W0903 23:44:11.915308  171911 logs.go:284] No container was found matching "kindnet"
	I0903 23:44:11.915317  171911 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0903 23:44:11.915396  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0903 23:44:11.949271  171911 cri.go:89] found id: ""
	I0903 23:44:11.949300  171911 logs.go:282] 0 containers: []
	W0903 23:44:11.949310  171911 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0903 23:44:11.949323  171911 logs.go:123] Gathering logs for dmesg ...
	I0903 23:44:11.949342  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0903 23:44:11.962921  171911 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:44:11.962954  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:44:12.025549  171911 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0903 23:44:12.025580  171911 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:44:12.025596  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0903 23:44:12.099077  171911 logs.go:123] Gathering logs for container status ...
	I0903 23:44:12.099120  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0903 23:44:12.136408  171911 logs.go:123] Gathering logs for kubelet ...
	I0903 23:44:12.136446  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
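
The sequence above is the core of the repeated probe: minikube runs `sudo crictl ps -a --quiet --name=<component>` over SSH for each control-plane component, and an empty result (the `found id: ""` / `0 containers` pair) means that component was never created on the container runtime. A minimal local sketch of the same check, assuming `crictl` and `sudo` are available on PATH (the helper name is illustrative, not minikube's actual code):

	package main
	
	import (
		"fmt"
		"os/exec"
		"strings"
	)
	
	// probeContainer mirrors the probe in the log: list all containers whose
	// name matches `name`, printing only their IDs. Empty output means the
	// component has never been created on this runtime.
	func probeContainer(name string) ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			return nil, err
		}
		// strings.Fields on an empty string yields an empty slice,
		// which corresponds to the log's "0 containers: []".
		return strings.Fields(strings.TrimSpace(string(out))), nil
	}
	
	func main() {
		for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
			ids, err := probeContainer(c)
			if err != nil {
				fmt.Printf("probe %s: %v\n", c, err)
				continue
			}
			fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
		}
	}
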
	I0903 23:44:14.686632  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:44:14.704032  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:44:14.704101  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:44:14.739046  171911 cri.go:89] found id: ""
	I0903 23:44:14.739076  171911 logs.go:282] 0 containers: []
	W0903 23:44:14.739084  171911 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:44:14.739091  171911 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:44:14.739156  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:44:14.775028  171911 cri.go:89] found id: ""
	I0903 23:44:14.775066  171911 logs.go:282] 0 containers: []
	W0903 23:44:14.775078  171911 logs.go:284] No container was found matching "etcd"
	I0903 23:44:14.775087  171911 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:44:14.775150  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:44:14.808896  171911 cri.go:89] found id: ""
	I0903 23:44:14.808928  171911 logs.go:282] 0 containers: []
	W0903 23:44:14.808939  171911 logs.go:284] No container was found matching "coredns"
	I0903 23:44:14.808947  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:44:14.809014  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:44:14.844967  171911 cri.go:89] found id: ""
	I0903 23:44:14.844998  171911 logs.go:282] 0 containers: []
	W0903 23:44:14.845010  171911 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:44:14.845018  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:44:14.845087  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:44:14.878706  171911 cri.go:89] found id: ""
	I0903 23:44:14.878734  171911 logs.go:282] 0 containers: []
	W0903 23:44:14.878742  171911 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:44:14.878750  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:44:14.878824  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:44:14.914368  171911 cri.go:89] found id: ""
	I0903 23:44:14.914407  171911 logs.go:282] 0 containers: []
	W0903 23:44:14.914420  171911 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:44:14.914429  171911 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:44:14.914523  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:44:14.949846  171911 cri.go:89] found id: ""
	I0903 23:44:14.949873  171911 logs.go:282] 0 containers: []
	W0903 23:44:14.949881  171911 logs.go:284] No container was found matching "kindnet"
	I0903 23:44:14.949888  171911 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0903 23:44:14.949956  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0903 23:44:14.985479  171911 cri.go:89] found id: ""
	I0903 23:44:14.985511  171911 logs.go:282] 0 containers: []
	W0903 23:44:14.985522  171911 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0903 23:44:14.985534  171911 logs.go:123] Gathering logs for kubelet ...
	I0903 23:44:14.985550  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0903 23:44:15.036097  171911 logs.go:123] Gathering logs for dmesg ...
	I0903 23:44:15.036141  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0903 23:44:15.050336  171911 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:44:15.050365  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:44:15.116416  171911 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0903 23:44:15.116439  171911 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:44:15.116457  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0903 23:44:15.193453  171911 logs.go:123] Gathering logs for container status ...
	I0903 23:44:15.193498  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
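
Every `describe nodes` attempt in these cycles fails the same way: with no kube-apiserver container running, nothing is listening on the apiserver port, so kubectl's connection to localhost:8443 is refused. A quick reachability check that reproduces the same symptom (a sketch; 8443 is the port from the log, everything else is illustrative):

	package main
	
	import (
		"fmt"
		"net"
		"time"
	)
	
	func main() {
		// With no kube-apiserver process bound to the port, this dial fails
		// with "connection refused", matching the kubectl error in the log.
		conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
		if err != nil {
			fmt.Println("apiserver unreachable:", err)
			return
		}
		conn.Close()
		fmt.Println("apiserver port is accepting connections")
	}
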
	I0903 23:44:17.731284  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:44:17.748791  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:44:17.748854  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:44:17.784857  171911 cri.go:89] found id: ""
	I0903 23:44:17.784884  171911 logs.go:282] 0 containers: []
	W0903 23:44:17.784892  171911 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:44:17.784897  171911 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:44:17.784953  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:44:17.819838  171911 cri.go:89] found id: ""
	I0903 23:44:17.819867  171911 logs.go:282] 0 containers: []
	W0903 23:44:17.819875  171911 logs.go:284] No container was found matching "etcd"
	I0903 23:44:17.819881  171911 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:44:17.819932  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:44:17.853453  171911 cri.go:89] found id: ""
	I0903 23:44:17.853482  171911 logs.go:282] 0 containers: []
	W0903 23:44:17.853489  171911 logs.go:284] No container was found matching "coredns"
	I0903 23:44:17.853496  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:44:17.853553  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:44:17.887886  171911 cri.go:89] found id: ""
	I0903 23:44:17.887915  171911 logs.go:282] 0 containers: []
	W0903 23:44:17.887923  171911 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:44:17.887930  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:44:17.887985  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:44:17.923140  171911 cri.go:89] found id: ""
	I0903 23:44:17.923172  171911 logs.go:282] 0 containers: []
	W0903 23:44:17.923183  171911 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:44:17.923190  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:44:17.923258  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:44:17.957595  171911 cri.go:89] found id: ""
	I0903 23:44:17.957625  171911 logs.go:282] 0 containers: []
	W0903 23:44:17.957638  171911 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:44:17.957647  171911 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:44:17.957717  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:44:17.990247  171911 cri.go:89] found id: ""
	I0903 23:44:17.990276  171911 logs.go:282] 0 containers: []
	W0903 23:44:17.990284  171911 logs.go:284] No container was found matching "kindnet"
	I0903 23:44:17.990290  171911 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0903 23:44:17.990362  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0903 23:44:18.024643  171911 cri.go:89] found id: ""
	I0903 23:44:18.024673  171911 logs.go:282] 0 containers: []
	W0903 23:44:18.024685  171911 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0903 23:44:18.024697  171911 logs.go:123] Gathering logs for kubelet ...
	I0903 23:44:18.024713  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0903 23:44:18.076397  171911 logs.go:123] Gathering logs for dmesg ...
	I0903 23:44:18.076436  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0903 23:44:18.090204  171911 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:44:18.090233  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:44:18.163020  171911 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0903 23:44:18.163044  171911 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:44:18.163059  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0903 23:44:18.240276  171911 logs.go:123] Gathering logs for container status ...
	I0903 23:44:18.240314  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0903 23:44:20.781710  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:44:20.798871  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:44:20.798939  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:44:20.833834  171911 cri.go:89] found id: ""
	I0903 23:44:20.833867  171911 logs.go:282] 0 containers: []
	W0903 23:44:20.833875  171911 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:44:20.833881  171911 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:44:20.833936  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:44:20.868536  171911 cri.go:89] found id: ""
	I0903 23:44:20.868569  171911 logs.go:282] 0 containers: []
	W0903 23:44:20.868577  171911 logs.go:284] No container was found matching "etcd"
	I0903 23:44:20.868583  171911 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:44:20.868639  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:44:20.902513  171911 cri.go:89] found id: ""
	I0903 23:44:20.902546  171911 logs.go:282] 0 containers: []
	W0903 23:44:20.902557  171911 logs.go:284] No container was found matching "coredns"
	I0903 23:44:20.902570  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:44:20.902644  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:44:20.935967  171911 cri.go:89] found id: ""
	I0903 23:44:20.935994  171911 logs.go:282] 0 containers: []
	W0903 23:44:20.936001  171911 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:44:20.936007  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:44:20.936070  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:44:20.969967  171911 cri.go:89] found id: ""
	I0903 23:44:20.969995  171911 logs.go:282] 0 containers: []
	W0903 23:44:20.970003  171911 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:44:20.970009  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:44:20.970067  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:44:21.005097  171911 cri.go:89] found id: ""
	I0903 23:44:21.005130  171911 logs.go:282] 0 containers: []
	W0903 23:44:21.005149  171911 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:44:21.005158  171911 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:44:21.005231  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:44:21.040315  171911 cri.go:89] found id: ""
	I0903 23:44:21.040350  171911 logs.go:282] 0 containers: []
	W0903 23:44:21.040357  171911 logs.go:284] No container was found matching "kindnet"
	I0903 23:44:21.040364  171911 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0903 23:44:21.040431  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0903 23:44:21.075411  171911 cri.go:89] found id: ""
	I0903 23:44:21.075447  171911 logs.go:282] 0 containers: []
	W0903 23:44:21.075456  171911 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0903 23:44:21.075466  171911 logs.go:123] Gathering logs for kubelet ...
	I0903 23:44:21.075478  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0903 23:44:21.125281  171911 logs.go:123] Gathering logs for dmesg ...
	I0903 23:44:21.125322  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0903 23:44:21.139605  171911 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:44:21.139635  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:44:21.203960  171911 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0903 23:44:21.203986  171911 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:44:21.204004  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0903 23:44:21.278167  171911 logs.go:123] Gathering logs for container status ...
	I0903 23:44:21.278211  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0903 23:44:23.820132  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:44:23.839119  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:44:23.839184  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:44:23.883827  171911 cri.go:89] found id: ""
	I0903 23:44:23.883864  171911 logs.go:282] 0 containers: []
	W0903 23:44:23.883876  171911 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:44:23.883884  171911 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:44:23.883943  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:44:23.929729  171911 cri.go:89] found id: ""
	I0903 23:44:23.929756  171911 logs.go:282] 0 containers: []
	W0903 23:44:23.929765  171911 logs.go:284] No container was found matching "etcd"
	I0903 23:44:23.929771  171911 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:44:23.929822  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:44:23.962676  171911 cri.go:89] found id: ""
	I0903 23:44:23.962708  171911 logs.go:282] 0 containers: []
	W0903 23:44:23.962716  171911 logs.go:284] No container was found matching "coredns"
	I0903 23:44:23.962722  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:44:23.962778  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:44:23.995464  171911 cri.go:89] found id: ""
	I0903 23:44:23.995505  171911 logs.go:282] 0 containers: []
	W0903 23:44:23.995516  171911 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:44:23.995522  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:44:23.995586  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:44:24.030690  171911 cri.go:89] found id: ""
	I0903 23:44:24.030718  171911 logs.go:282] 0 containers: []
	W0903 23:44:24.030726  171911 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:44:24.030733  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:44:24.030791  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:44:24.064311  171911 cri.go:89] found id: ""
	I0903 23:44:24.064338  171911 logs.go:282] 0 containers: []
	W0903 23:44:24.064346  171911 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:44:24.064352  171911 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:44:24.064408  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:44:24.098888  171911 cri.go:89] found id: ""
	I0903 23:44:24.098917  171911 logs.go:282] 0 containers: []
	W0903 23:44:24.098924  171911 logs.go:284] No container was found matching "kindnet"
	I0903 23:44:24.098930  171911 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0903 23:44:24.098990  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0903 23:44:24.135030  171911 cri.go:89] found id: ""
	I0903 23:44:24.135057  171911 logs.go:282] 0 containers: []
	W0903 23:44:24.135064  171911 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0903 23:44:24.135074  171911 logs.go:123] Gathering logs for kubelet ...
	I0903 23:44:24.135086  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0903 23:44:24.185228  171911 logs.go:123] Gathering logs for dmesg ...
	I0903 23:44:24.185266  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0903 23:44:24.198908  171911 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:44:24.198937  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:44:24.260291  171911 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0903 23:44:24.260337  171911 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:44:24.260355  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0903 23:44:24.337581  171911 logs.go:123] Gathering logs for container status ...
	I0903 23:44:24.337620  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0903 23:44:26.876959  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:44:26.893615  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:44:26.893679  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:44:26.926745  171911 cri.go:89] found id: ""
	I0903 23:44:26.926776  171911 logs.go:282] 0 containers: []
	W0903 23:44:26.926784  171911 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:44:26.926791  171911 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:44:26.926848  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:44:26.959697  171911 cri.go:89] found id: ""
	I0903 23:44:26.959727  171911 logs.go:282] 0 containers: []
	W0903 23:44:26.959735  171911 logs.go:284] No container was found matching "etcd"
	I0903 23:44:26.959742  171911 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:44:26.959795  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:44:26.991963  171911 cri.go:89] found id: ""
	I0903 23:44:26.991996  171911 logs.go:282] 0 containers: []
	W0903 23:44:26.992004  171911 logs.go:284] No container was found matching "coredns"
	I0903 23:44:26.992011  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:44:26.992064  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:44:27.025939  171911 cri.go:89] found id: ""
	I0903 23:44:27.025978  171911 logs.go:282] 0 containers: []
	W0903 23:44:27.025989  171911 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:44:27.025997  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:44:27.026065  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:44:27.058572  171911 cri.go:89] found id: ""
	I0903 23:44:27.058598  171911 logs.go:282] 0 containers: []
	W0903 23:44:27.058606  171911 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:44:27.058612  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:44:27.058666  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:44:27.092277  171911 cri.go:89] found id: ""
	I0903 23:44:27.092309  171911 logs.go:282] 0 containers: []
	W0903 23:44:27.092318  171911 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:44:27.092324  171911 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:44:27.092385  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:44:27.127742  171911 cri.go:89] found id: ""
	I0903 23:44:27.127777  171911 logs.go:282] 0 containers: []
	W0903 23:44:27.127789  171911 logs.go:284] No container was found matching "kindnet"
	I0903 23:44:27.127798  171911 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0903 23:44:27.127872  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0903 23:44:27.162425  171911 cri.go:89] found id: ""
	I0903 23:44:27.162463  171911 logs.go:282] 0 containers: []
	W0903 23:44:27.162474  171911 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0903 23:44:27.162487  171911 logs.go:123] Gathering logs for kubelet ...
	I0903 23:44:27.162503  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0903 23:44:27.213126  171911 logs.go:123] Gathering logs for dmesg ...
	I0903 23:44:27.213165  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0903 23:44:27.226983  171911 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:44:27.227013  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:44:27.293122  171911 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0903 23:44:27.293152  171911 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:44:27.293169  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0903 23:44:27.368497  171911 logs.go:123] Gathering logs for container status ...
	I0903 23:44:27.368538  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0903 23:44:29.907183  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:44:29.924079  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:44:29.924172  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:44:29.957813  171911 cri.go:89] found id: ""
	I0903 23:44:29.957843  171911 logs.go:282] 0 containers: []
	W0903 23:44:29.957851  171911 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:44:29.957857  171911 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:44:29.957919  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:44:29.992782  171911 cri.go:89] found id: ""
	I0903 23:44:29.992812  171911 logs.go:282] 0 containers: []
	W0903 23:44:29.992819  171911 logs.go:284] No container was found matching "etcd"
	I0903 23:44:29.992826  171911 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:44:29.992888  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:44:30.026629  171911 cri.go:89] found id: ""
	I0903 23:44:30.026664  171911 logs.go:282] 0 containers: []
	W0903 23:44:30.026674  171911 logs.go:284] No container was found matching "coredns"
	I0903 23:44:30.026682  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:44:30.026756  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:44:30.060035  171911 cri.go:89] found id: ""
	I0903 23:44:30.060074  171911 logs.go:282] 0 containers: []
	W0903 23:44:30.060083  171911 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:44:30.060092  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:44:30.060154  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:44:30.101281  171911 cri.go:89] found id: ""
	I0903 23:44:30.101319  171911 logs.go:282] 0 containers: []
	W0903 23:44:30.101330  171911 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:44:30.101338  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:44:30.101419  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:44:30.146884  171911 cri.go:89] found id: ""
	I0903 23:44:30.146911  171911 logs.go:282] 0 containers: []
	W0903 23:44:30.146918  171911 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:44:30.146925  171911 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:44:30.146989  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:44:30.180988  171911 cri.go:89] found id: ""
	I0903 23:44:30.181016  171911 logs.go:282] 0 containers: []
	W0903 23:44:30.181024  171911 logs.go:284] No container was found matching "kindnet"
	I0903 23:44:30.181030  171911 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0903 23:44:30.181103  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0903 23:44:30.214648  171911 cri.go:89] found id: ""
	I0903 23:44:30.214679  171911 logs.go:282] 0 containers: []
	W0903 23:44:30.214687  171911 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0903 23:44:30.214696  171911 logs.go:123] Gathering logs for kubelet ...
	I0903 23:44:30.214709  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0903 23:44:30.262757  171911 logs.go:123] Gathering logs for dmesg ...
	I0903 23:44:30.262799  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0903 23:44:30.283299  171911 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:44:30.283331  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:44:30.366919  171911 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0903 23:44:30.366945  171911 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:44:30.366959  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0903 23:44:30.442612  171911 logs.go:123] Gathering logs for container status ...
	I0903 23:44:30.442654  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0903 23:44:32.981733  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:44:32.999850  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:44:32.999930  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:44:33.040618  171911 cri.go:89] found id: ""
	I0903 23:44:33.040653  171911 logs.go:282] 0 containers: []
	W0903 23:44:33.040664  171911 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:44:33.040671  171911 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:44:33.040738  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:44:33.081786  171911 cri.go:89] found id: ""
	I0903 23:44:33.081818  171911 logs.go:282] 0 containers: []
	W0903 23:44:33.081829  171911 logs.go:284] No container was found matching "etcd"
	I0903 23:44:33.081836  171911 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:44:33.081906  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:44:33.125847  171911 cri.go:89] found id: ""
	I0903 23:44:33.125878  171911 logs.go:282] 0 containers: []
	W0903 23:44:33.125888  171911 logs.go:284] No container was found matching "coredns"
	I0903 23:44:33.125896  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:44:33.125962  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:44:33.167437  171911 cri.go:89] found id: ""
	I0903 23:44:33.167465  171911 logs.go:282] 0 containers: []
	W0903 23:44:33.167473  171911 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:44:33.167481  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:44:33.167557  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:44:33.208145  171911 cri.go:89] found id: ""
	I0903 23:44:33.208177  171911 logs.go:282] 0 containers: []
	W0903 23:44:33.208185  171911 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:44:33.208192  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:44:33.208248  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:44:33.250045  171911 cri.go:89] found id: ""
	I0903 23:44:33.250074  171911 logs.go:282] 0 containers: []
	W0903 23:44:33.250081  171911 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:44:33.250087  171911 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:44:33.250139  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:44:33.289576  171911 cri.go:89] found id: ""
	I0903 23:44:33.289607  171911 logs.go:282] 0 containers: []
	W0903 23:44:33.289615  171911 logs.go:284] No container was found matching "kindnet"
	I0903 23:44:33.289621  171911 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0903 23:44:33.289676  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0903 23:44:33.325452  171911 cri.go:89] found id: ""
	I0903 23:44:33.325485  171911 logs.go:282] 0 containers: []
	W0903 23:44:33.325493  171911 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0903 23:44:33.325503  171911 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:44:33.325515  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0903 23:44:33.403967  171911 logs.go:123] Gathering logs for container status ...
	I0903 23:44:33.404018  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0903 23:44:33.441581  171911 logs.go:123] Gathering logs for kubelet ...
	I0903 23:44:33.441619  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0903 23:44:33.488744  171911 logs.go:123] Gathering logs for dmesg ...
	I0903 23:44:33.488794  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0903 23:44:33.502603  171911 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:44:33.502648  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:44:33.567447  171911 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0903 23:44:36.069781  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:44:36.093945  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:44:36.094023  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:44:36.138900  171911 cri.go:89] found id: ""
	I0903 23:44:36.138929  171911 logs.go:282] 0 containers: []
	W0903 23:44:36.138940  171911 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:44:36.138950  171911 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:44:36.139016  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:44:36.174814  171911 cri.go:89] found id: ""
	I0903 23:44:36.174841  171911 logs.go:282] 0 containers: []
	W0903 23:44:36.174849  171911 logs.go:284] No container was found matching "etcd"
	I0903 23:44:36.174855  171911 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:44:36.174918  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:44:36.211574  171911 cri.go:89] found id: ""
	I0903 23:44:36.211604  171911 logs.go:282] 0 containers: []
	W0903 23:44:36.211611  171911 logs.go:284] No container was found matching "coredns"
	I0903 23:44:36.211618  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:44:36.211670  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:44:36.245780  171911 cri.go:89] found id: ""
	I0903 23:44:36.245812  171911 logs.go:282] 0 containers: []
	W0903 23:44:36.245823  171911 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:44:36.245830  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:44:36.245886  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:44:36.280576  171911 cri.go:89] found id: ""
	I0903 23:44:36.280606  171911 logs.go:282] 0 containers: []
	W0903 23:44:36.280614  171911 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:44:36.280620  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:44:36.280674  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:44:36.315469  171911 cri.go:89] found id: ""
	I0903 23:44:36.315504  171911 logs.go:282] 0 containers: []
	W0903 23:44:36.315515  171911 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:44:36.315524  171911 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:44:36.315582  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:44:36.349983  171911 cri.go:89] found id: ""
	I0903 23:44:36.350018  171911 logs.go:282] 0 containers: []
	W0903 23:44:36.350027  171911 logs.go:284] No container was found matching "kindnet"
	I0903 23:44:36.350033  171911 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0903 23:44:36.350083  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0903 23:44:36.384827  171911 cri.go:89] found id: ""
	I0903 23:44:36.384857  171911 logs.go:282] 0 containers: []
	W0903 23:44:36.384866  171911 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0903 23:44:36.384877  171911 logs.go:123] Gathering logs for dmesg ...
	I0903 23:44:36.384896  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0903 23:44:36.398999  171911 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:44:36.399029  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:44:36.467458  171911 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0903 23:44:36.467492  171911 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:44:36.467507  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0903 23:44:36.546881  171911 logs.go:123] Gathering logs for container status ...
	I0903 23:44:36.546922  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0903 23:44:36.584400  171911 logs.go:123] Gathering logs for kubelet ...
	I0903 23:44:36.584437  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0903 23:44:39.135283  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:44:39.152700  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:44:39.152762  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:44:39.187286  171911 cri.go:89] found id: ""
	I0903 23:44:39.187333  171911 logs.go:282] 0 containers: []
	W0903 23:44:39.187344  171911 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:44:39.187351  171911 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:44:39.187418  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:44:39.222904  171911 cri.go:89] found id: ""
	I0903 23:44:39.222932  171911 logs.go:282] 0 containers: []
	W0903 23:44:39.222940  171911 logs.go:284] No container was found matching "etcd"
	I0903 23:44:39.222946  171911 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:44:39.223001  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:44:39.256820  171911 cri.go:89] found id: ""
	I0903 23:44:39.256849  171911 logs.go:282] 0 containers: []
	W0903 23:44:39.256860  171911 logs.go:284] No container was found matching "coredns"
	I0903 23:44:39.256867  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:44:39.256936  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:44:39.290701  171911 cri.go:89] found id: ""
	I0903 23:44:39.290732  171911 logs.go:282] 0 containers: []
	W0903 23:44:39.290742  171911 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:44:39.290748  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:44:39.290814  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:44:39.325458  171911 cri.go:89] found id: ""
	I0903 23:44:39.325494  171911 logs.go:282] 0 containers: []
	W0903 23:44:39.325505  171911 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:44:39.325513  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:44:39.325577  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:44:39.358959  171911 cri.go:89] found id: ""
	I0903 23:44:39.358988  171911 logs.go:282] 0 containers: []
	W0903 23:44:39.358996  171911 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:44:39.359002  171911 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:44:39.359070  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:44:39.394031  171911 cri.go:89] found id: ""
	I0903 23:44:39.394058  171911 logs.go:282] 0 containers: []
	W0903 23:44:39.394066  171911 logs.go:284] No container was found matching "kindnet"
	I0903 23:44:39.394072  171911 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0903 23:44:39.394135  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0903 23:44:39.428921  171911 cri.go:89] found id: ""
	I0903 23:44:39.428950  171911 logs.go:282] 0 containers: []
	W0903 23:44:39.428961  171911 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0903 23:44:39.428973  171911 logs.go:123] Gathering logs for kubelet ...
	I0903 23:44:39.428992  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0903 23:44:39.478303  171911 logs.go:123] Gathering logs for dmesg ...
	I0903 23:44:39.478346  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0903 23:44:39.492136  171911 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:44:39.492165  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:44:39.556474  171911 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0903 23:44:39.556499  171911 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:44:39.556512  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0903 23:44:39.630384  171911 logs.go:123] Gathering logs for container status ...
	I0903 23:44:39.630421  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0903 23:44:42.169783  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:44:42.186331  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:44:42.186392  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:44:42.220630  171911 cri.go:89] found id: ""
	I0903 23:44:42.220658  171911 logs.go:282] 0 containers: []
	W0903 23:44:42.220669  171911 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:44:42.220678  171911 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:44:42.220751  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:44:42.256274  171911 cri.go:89] found id: ""
	I0903 23:44:42.256310  171911 logs.go:282] 0 containers: []
	W0903 23:44:42.256321  171911 logs.go:284] No container was found matching "etcd"
	I0903 23:44:42.256329  171911 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:44:42.256387  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:44:42.289958  171911 cri.go:89] found id: ""
	I0903 23:44:42.289988  171911 logs.go:282] 0 containers: []
	W0903 23:44:42.289998  171911 logs.go:284] No container was found matching "coredns"
	I0903 23:44:42.290006  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:44:42.290065  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:44:42.322425  171911 cri.go:89] found id: ""
	I0903 23:44:42.322453  171911 logs.go:282] 0 containers: []
	W0903 23:44:42.322464  171911 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:44:42.322473  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:44:42.322537  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:44:42.357459  171911 cri.go:89] found id: ""
	I0903 23:44:42.357494  171911 logs.go:282] 0 containers: []
	W0903 23:44:42.357503  171911 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:44:42.357509  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:44:42.357588  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:44:42.390807  171911 cri.go:89] found id: ""
	I0903 23:44:42.390837  171911 logs.go:282] 0 containers: []
	W0903 23:44:42.390845  171911 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:44:42.390851  171911 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:44:42.390924  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:44:42.424548  171911 cri.go:89] found id: ""
	I0903 23:44:42.424579  171911 logs.go:282] 0 containers: []
	W0903 23:44:42.424590  171911 logs.go:284] No container was found matching "kindnet"
	I0903 23:44:42.424598  171911 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0903 23:44:42.424667  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0903 23:44:42.459215  171911 cri.go:89] found id: ""
	I0903 23:44:42.459250  171911 logs.go:282] 0 containers: []
	W0903 23:44:42.459261  171911 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0903 23:44:42.459274  171911 logs.go:123] Gathering logs for kubelet ...
	I0903 23:44:42.459290  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0903 23:44:42.505525  171911 logs.go:123] Gathering logs for dmesg ...
	I0903 23:44:42.505560  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0903 23:44:42.519712  171911 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:44:42.519744  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:44:42.583576  171911 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0903 23:44:42.583603  171911 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:44:42.583618  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0903 23:44:42.660899  171911 logs.go:123] Gathering logs for container status ...
	I0903 23:44:42.660936  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0903 23:44:45.200707  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:44:45.217299  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:44:45.217372  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:44:45.252045  171911 cri.go:89] found id: ""
	I0903 23:44:45.252073  171911 logs.go:282] 0 containers: []
	W0903 23:44:45.252081  171911 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:44:45.252087  171911 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:44:45.252155  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:44:45.287247  171911 cri.go:89] found id: ""
	I0903 23:44:45.287281  171911 logs.go:282] 0 containers: []
	W0903 23:44:45.287289  171911 logs.go:284] No container was found matching "etcd"
	I0903 23:44:45.287296  171911 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:44:45.287353  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:44:45.320423  171911 cri.go:89] found id: ""
	I0903 23:44:45.320450  171911 logs.go:282] 0 containers: []
	W0903 23:44:45.320457  171911 logs.go:284] No container was found matching "coredns"
	I0903 23:44:45.320463  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:44:45.320517  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:44:45.353147  171911 cri.go:89] found id: ""
	I0903 23:44:45.353179  171911 logs.go:282] 0 containers: []
	W0903 23:44:45.353187  171911 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:44:45.353193  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:44:45.353261  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:44:45.387052  171911 cri.go:89] found id: ""
	I0903 23:44:45.387080  171911 logs.go:282] 0 containers: []
	W0903 23:44:45.387089  171911 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:44:45.387096  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:44:45.387151  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:44:45.422621  171911 cri.go:89] found id: ""
	I0903 23:44:45.422651  171911 logs.go:282] 0 containers: []
	W0903 23:44:45.422659  171911 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:44:45.422666  171911 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:44:45.422734  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:44:45.457224  171911 cri.go:89] found id: ""
	I0903 23:44:45.457258  171911 logs.go:282] 0 containers: []
	W0903 23:44:45.457266  171911 logs.go:284] No container was found matching "kindnet"
	I0903 23:44:45.457274  171911 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0903 23:44:45.457339  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0903 23:44:45.490659  171911 cri.go:89] found id: ""
	I0903 23:44:45.490685  171911 logs.go:282] 0 containers: []
	W0903 23:44:45.490693  171911 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0903 23:44:45.490706  171911 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:44:45.490729  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:44:45.556871  171911 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0903 23:44:45.556894  171911 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:44:45.556909  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0903 23:44:45.628062  171911 logs.go:123] Gathering logs for container status ...
	I0903 23:44:45.628101  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0903 23:44:45.666937  171911 logs.go:123] Gathering logs for kubelet ...
	I0903 23:44:45.666977  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0903 23:44:45.713545  171911 logs.go:123] Gathering logs for dmesg ...
	I0903 23:44:45.713580  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
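Every probe round above maps to the same underlying check: the runner shells into the VM and lists CRI containers by name with `sudo crictl ps -a --quiet --name=<component>`, and an empty result is logged as "0 containers". The following is a minimal Go sketch of that check, run on the node itself and assuming `crictl` is installed and sudo works non-interactively; it illustrates the probe shape and is not minikube's actual cri.go code.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainerIDs mirrors the probe seen in the log: it runs
// `sudo crictl ps -a --quiet --name=<name>` and returns the container
// IDs crictl prints, one per line (empty slice when nothing matches).
func listContainerIDs(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, fmt.Errorf("crictl ps failed: %w", err)
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, name := range []string{"kube-apiserver", "etcd", "coredns"} {
		ids, err := listContainerIDs(name)
		if err != nil {
			fmt.Println("error:", err)
			continue
		}
		// Matches the log's "N containers: [...]" shape.
		fmt.Printf("%d containers matching %q: %v\n", len(ids), name, ids)
	}
}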
	I0903 23:44:48.227552  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:44:48.245044  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:44:48.245118  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:44:48.279490  171911 cri.go:89] found id: ""
	I0903 23:44:48.279519  171911 logs.go:282] 0 containers: []
	W0903 23:44:48.279529  171911 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:44:48.279537  171911 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:44:48.279621  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:44:48.313971  171911 cri.go:89] found id: ""
	I0903 23:44:48.313998  171911 logs.go:282] 0 containers: []
	W0903 23:44:48.314006  171911 logs.go:284] No container was found matching "etcd"
	I0903 23:44:48.314012  171911 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:44:48.314076  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:44:48.349729  171911 cri.go:89] found id: ""
	I0903 23:44:48.349765  171911 logs.go:282] 0 containers: []
	W0903 23:44:48.349773  171911 logs.go:284] No container was found matching "coredns"
	I0903 23:44:48.349779  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:44:48.349843  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:44:48.384104  171911 cri.go:89] found id: ""
	I0903 23:44:48.384132  171911 logs.go:282] 0 containers: []
	W0903 23:44:48.384140  171911 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:44:48.384147  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:44:48.384210  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:44:48.418534  171911 cri.go:89] found id: ""
	I0903 23:44:48.418569  171911 logs.go:282] 0 containers: []
	W0903 23:44:48.418581  171911 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:44:48.418589  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:44:48.418656  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:44:48.452604  171911 cri.go:89] found id: ""
	I0903 23:44:48.452632  171911 logs.go:282] 0 containers: []
	W0903 23:44:48.452640  171911 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:44:48.452647  171911 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:44:48.452711  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:44:48.485587  171911 cri.go:89] found id: ""
	I0903 23:44:48.485618  171911 logs.go:282] 0 containers: []
	W0903 23:44:48.485629  171911 logs.go:284] No container was found matching "kindnet"
	I0903 23:44:48.485636  171911 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0903 23:44:48.485701  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0903 23:44:48.518840  171911 cri.go:89] found id: ""
	I0903 23:44:48.518865  171911 logs.go:282] 0 containers: []
	W0903 23:44:48.518876  171911 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0903 23:44:48.518890  171911 logs.go:123] Gathering logs for kubelet ...
	I0903 23:44:48.518906  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0903 23:44:48.566332  171911 logs.go:123] Gathering logs for dmesg ...
	I0903 23:44:48.566368  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0903 23:44:48.580074  171911 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:44:48.580103  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:44:48.646139  171911 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0903 23:44:48.646163  171911 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:44:48.646177  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0903 23:44:48.721508  171911 logs.go:123] Gathering logs for container status ...
	I0903 23:44:48.721551  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0903 23:44:51.261729  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:44:51.277615  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:44:51.277688  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:44:51.311728  171911 cri.go:89] found id: ""
	I0903 23:44:51.311758  171911 logs.go:282] 0 containers: []
	W0903 23:44:51.311767  171911 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:44:51.311773  171911 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:44:51.311841  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:44:51.346364  171911 cri.go:89] found id: ""
	I0903 23:44:51.346394  171911 logs.go:282] 0 containers: []
	W0903 23:44:51.346402  171911 logs.go:284] No container was found matching "etcd"
	I0903 23:44:51.346408  171911 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:44:51.346467  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:44:51.380196  171911 cri.go:89] found id: ""
	I0903 23:44:51.380233  171911 logs.go:282] 0 containers: []
	W0903 23:44:51.380249  171911 logs.go:284] No container was found matching "coredns"
	I0903 23:44:51.380259  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:44:51.380331  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:44:51.414829  171911 cri.go:89] found id: ""
	I0903 23:44:51.414861  171911 logs.go:282] 0 containers: []
	W0903 23:44:51.414869  171911 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:44:51.414875  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:44:51.414943  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:44:51.448741  171911 cri.go:89] found id: ""
	I0903 23:44:51.448779  171911 logs.go:282] 0 containers: []
	W0903 23:44:51.448792  171911 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:44:51.448801  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:44:51.448865  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:44:51.484499  171911 cri.go:89] found id: ""
	I0903 23:44:51.484537  171911 logs.go:282] 0 containers: []
	W0903 23:44:51.484545  171911 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:44:51.484552  171911 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:44:51.484605  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:44:51.518538  171911 cri.go:89] found id: ""
	I0903 23:44:51.518568  171911 logs.go:282] 0 containers: []
	W0903 23:44:51.518580  171911 logs.go:284] No container was found matching "kindnet"
	I0903 23:44:51.518589  171911 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0903 23:44:51.518649  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0903 23:44:51.560124  171911 cri.go:89] found id: ""
	I0903 23:44:51.560158  171911 logs.go:282] 0 containers: []
	W0903 23:44:51.560168  171911 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0903 23:44:51.560193  171911 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:44:51.560207  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0903 23:44:51.636716  171911 logs.go:123] Gathering logs for container status ...
	I0903 23:44:51.636760  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0903 23:44:51.674322  171911 logs.go:123] Gathering logs for kubelet ...
	I0903 23:44:51.674355  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0903 23:44:51.723819  171911 logs.go:123] Gathering logs for dmesg ...
	I0903 23:44:51.723856  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0903 23:44:51.737446  171911 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:44:51.737478  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:44:51.800575  171911 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
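The repeated "describe nodes" failure is the key symptom in this dump: the version-matched kubectl is pointed at the kubeconfig inside the VM, whose server address is localhost:8443, and the connection is refused because nothing is listening there, which is consistent with every crictl probe finding zero kube-apiserver containers. As a hedged illustration rather than anything minikube itself runs, the same condition can be confirmed with a plain TCP dial:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// localhost:8443 is the apiserver address from the kubeconfig in the log.
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		// "connection refused" here corresponds to kubectl's error above.
		fmt.Println("apiserver not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("something is listening on localhost:8443")
}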
	I0903 23:44:54.300746  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:44:54.317060  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:44:54.317135  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:44:54.350356  171911 cri.go:89] found id: ""
	I0903 23:44:54.350382  171911 logs.go:282] 0 containers: []
	W0903 23:44:54.350389  171911 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:44:54.350396  171911 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:44:54.350458  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:44:54.386548  171911 cri.go:89] found id: ""
	I0903 23:44:54.386577  171911 logs.go:282] 0 containers: []
	W0903 23:44:54.386586  171911 logs.go:284] No container was found matching "etcd"
	I0903 23:44:54.386593  171911 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:44:54.386647  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:44:54.423360  171911 cri.go:89] found id: ""
	I0903 23:44:54.423388  171911 logs.go:282] 0 containers: []
	W0903 23:44:54.423395  171911 logs.go:284] No container was found matching "coredns"
	I0903 23:44:54.423407  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:44:54.423458  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:44:54.458673  171911 cri.go:89] found id: ""
	I0903 23:44:54.458701  171911 logs.go:282] 0 containers: []
	W0903 23:44:54.458709  171911 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:44:54.458716  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:44:54.458781  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:44:54.491692  171911 cri.go:89] found id: ""
	I0903 23:44:54.491726  171911 logs.go:282] 0 containers: []
	W0903 23:44:54.491738  171911 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:44:54.491746  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:44:54.491809  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:44:54.524500  171911 cri.go:89] found id: ""
	I0903 23:44:54.524530  171911 logs.go:282] 0 containers: []
	W0903 23:44:54.524543  171911 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:44:54.524550  171911 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:44:54.524614  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:44:54.558644  171911 cri.go:89] found id: ""
	I0903 23:44:54.558676  171911 logs.go:282] 0 containers: []
	W0903 23:44:54.558688  171911 logs.go:284] No container was found matching "kindnet"
	I0903 23:44:54.558696  171911 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0903 23:44:54.558773  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0903 23:44:54.592814  171911 cri.go:89] found id: ""
	I0903 23:44:54.592841  171911 logs.go:282] 0 containers: []
	W0903 23:44:54.592851  171911 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0903 23:44:54.592863  171911 logs.go:123] Gathering logs for kubelet ...
	I0903 23:44:54.592879  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0903 23:44:54.642538  171911 logs.go:123] Gathering logs for dmesg ...
	I0903 23:44:54.642572  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0903 23:44:54.656435  171911 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:44:54.656468  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:44:54.721260  171911 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0903 23:44:54.721286  171911 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:44:54.721304  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0903 23:44:54.798283  171911 logs.go:123] Gathering logs for container status ...
	I0903 23:44:54.798323  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
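The container-status gather uses a deliberately defensive one-liner: `which crictl || echo crictl` resolves crictl's full path when it is on PATH (falling back to the bare name), and if the whole crictl invocation fails the command falls back to `sudo docker ps -a`. The same try-then-fall-back shape in Go, as a sketch under the same sudo/crictl assumptions as above and not minikube's implementation:

package main

import (
	"fmt"
	"os/exec"
)

// containerStatus tries crictl first and falls back to docker,
// mirroring the shell fallback used in the log's gather step.
func containerStatus() (string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a").CombinedOutput()
	if err == nil {
		return string(out), nil
	}
	out, err = exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
	if err != nil {
		return "", fmt.Errorf("both crictl and docker failed: %w", err)
	}
	return string(out), nil
}

func main() {
	status, err := containerStatus()
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Print(status)
}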
	I0903 23:44:57.337294  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:44:57.353760  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:44:57.353842  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:44:57.387108  171911 cri.go:89] found id: ""
	I0903 23:44:57.387136  171911 logs.go:282] 0 containers: []
	W0903 23:44:57.387146  171911 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:44:57.387153  171911 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:44:57.387219  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:44:57.421245  171911 cri.go:89] found id: ""
	I0903 23:44:57.421273  171911 logs.go:282] 0 containers: []
	W0903 23:44:57.421283  171911 logs.go:284] No container was found matching "etcd"
	I0903 23:44:57.421291  171911 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:44:57.421367  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:44:57.455403  171911 cri.go:89] found id: ""
	I0903 23:44:57.455431  171911 logs.go:282] 0 containers: []
	W0903 23:44:57.455441  171911 logs.go:284] No container was found matching "coredns"
	I0903 23:44:57.455450  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:44:57.455510  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:44:57.487825  171911 cri.go:89] found id: ""
	I0903 23:44:57.487860  171911 logs.go:282] 0 containers: []
	W0903 23:44:57.487871  171911 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:44:57.487880  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:44:57.487935  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:44:57.522048  171911 cri.go:89] found id: ""
	I0903 23:44:57.522073  171911 logs.go:282] 0 containers: []
	W0903 23:44:57.522081  171911 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:44:57.522087  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:44:57.522140  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:44:57.555520  171911 cri.go:89] found id: ""
	I0903 23:44:57.555545  171911 logs.go:282] 0 containers: []
	W0903 23:44:57.555553  171911 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:44:57.555560  171911 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:44:57.555622  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:44:57.588895  171911 cri.go:89] found id: ""
	I0903 23:44:57.588924  171911 logs.go:282] 0 containers: []
	W0903 23:44:57.588933  171911 logs.go:284] No container was found matching "kindnet"
	I0903 23:44:57.588941  171911 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0903 23:44:57.589002  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0903 23:44:57.623152  171911 cri.go:89] found id: ""
	I0903 23:44:57.623190  171911 logs.go:282] 0 containers: []
	W0903 23:44:57.623198  171911 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0903 23:44:57.623207  171911 logs.go:123] Gathering logs for kubelet ...
	I0903 23:44:57.623217  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0903 23:44:57.672898  171911 logs.go:123] Gathering logs for dmesg ...
	I0903 23:44:57.672938  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0903 23:44:57.686578  171911 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:44:57.686611  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:44:57.750436  171911 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0903 23:44:57.750467  171911 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:44:57.750485  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0903 23:44:57.830779  171911 logs.go:123] Gathering logs for container status ...
	I0903 23:44:57.830829  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0903 23:45:00.371014  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:45:00.387297  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:45:00.387414  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:45:00.420632  171911 cri.go:89] found id: ""
	I0903 23:45:00.420662  171911 logs.go:282] 0 containers: []
	W0903 23:45:00.420670  171911 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:45:00.420676  171911 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:45:00.420729  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:45:00.453824  171911 cri.go:89] found id: ""
	I0903 23:45:00.453852  171911 logs.go:282] 0 containers: []
	W0903 23:45:00.453860  171911 logs.go:284] No container was found matching "etcd"
	I0903 23:45:00.453866  171911 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:45:00.453917  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:45:00.488618  171911 cri.go:89] found id: ""
	I0903 23:45:00.488650  171911 logs.go:282] 0 containers: []
	W0903 23:45:00.488661  171911 logs.go:284] No container was found matching "coredns"
	I0903 23:45:00.488669  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:45:00.488738  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:45:00.522545  171911 cri.go:89] found id: ""
	I0903 23:45:00.522579  171911 logs.go:282] 0 containers: []
	W0903 23:45:00.522587  171911 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:45:00.522595  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:45:00.522655  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:45:00.555419  171911 cri.go:89] found id: ""
	I0903 23:45:00.555445  171911 logs.go:282] 0 containers: []
	W0903 23:45:00.555453  171911 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:45:00.555459  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:45:00.555515  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:45:00.588742  171911 cri.go:89] found id: ""
	I0903 23:45:00.588777  171911 logs.go:282] 0 containers: []
	W0903 23:45:00.588790  171911 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:45:00.588799  171911 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:45:00.588876  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:45:00.621164  171911 cri.go:89] found id: ""
	I0903 23:45:00.621194  171911 logs.go:282] 0 containers: []
	W0903 23:45:00.621205  171911 logs.go:284] No container was found matching "kindnet"
	I0903 23:45:00.621212  171911 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0903 23:45:00.621287  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0903 23:45:00.652140  171911 cri.go:89] found id: ""
	I0903 23:45:00.652167  171911 logs.go:282] 0 containers: []
	W0903 23:45:00.652178  171911 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0903 23:45:00.652191  171911 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:45:00.652206  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0903 23:45:00.733518  171911 logs.go:123] Gathering logs for container status ...
	I0903 23:45:00.733560  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0903 23:45:00.770455  171911 logs.go:123] Gathering logs for kubelet ...
	I0903 23:45:00.770489  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0903 23:45:00.819129  171911 logs.go:123] Gathering logs for dmesg ...
	I0903 23:45:00.819161  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0903 23:45:00.832460  171911 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:45:00.832492  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:45:00.895930  171911 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0903 23:45:03.397643  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:45:03.414370  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:45:03.414441  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:45:03.448753  171911 cri.go:89] found id: ""
	I0903 23:45:03.448787  171911 logs.go:282] 0 containers: []
	W0903 23:45:03.448795  171911 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:45:03.448802  171911 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:45:03.448860  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:45:03.484668  171911 cri.go:89] found id: ""
	I0903 23:45:03.484696  171911 logs.go:282] 0 containers: []
	W0903 23:45:03.484703  171911 logs.go:284] No container was found matching "etcd"
	I0903 23:45:03.484709  171911 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:45:03.484763  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:45:03.517157  171911 cri.go:89] found id: ""
	I0903 23:45:03.517184  171911 logs.go:282] 0 containers: []
	W0903 23:45:03.517191  171911 logs.go:284] No container was found matching "coredns"
	I0903 23:45:03.517197  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:45:03.517250  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:45:03.552220  171911 cri.go:89] found id: ""
	I0903 23:45:03.552246  171911 logs.go:282] 0 containers: []
	W0903 23:45:03.552255  171911 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:45:03.552262  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:45:03.552328  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:45:03.585731  171911 cri.go:89] found id: ""
	I0903 23:45:03.585764  171911 logs.go:282] 0 containers: []
	W0903 23:45:03.585774  171911 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:45:03.585783  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:45:03.585854  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:45:03.619396  171911 cri.go:89] found id: ""
	I0903 23:45:03.619425  171911 logs.go:282] 0 containers: []
	W0903 23:45:03.619433  171911 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:45:03.619439  171911 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:45:03.619503  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:45:03.653461  171911 cri.go:89] found id: ""
	I0903 23:45:03.653489  171911 logs.go:282] 0 containers: []
	W0903 23:45:03.653500  171911 logs.go:284] No container was found matching "kindnet"
	I0903 23:45:03.653509  171911 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0903 23:45:03.653562  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0903 23:45:03.690075  171911 cri.go:89] found id: ""
	I0903 23:45:03.690102  171911 logs.go:282] 0 containers: []
	W0903 23:45:03.690112  171911 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0903 23:45:03.690123  171911 logs.go:123] Gathering logs for kubelet ...
	I0903 23:45:03.690139  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0903 23:45:03.742271  171911 logs.go:123] Gathering logs for dmesg ...
	I0903 23:45:03.742305  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0903 23:45:03.755513  171911 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:45:03.755548  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:45:03.817702  171911 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0903 23:45:03.817734  171911 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:45:03.817758  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0903 23:45:03.894336  171911 logs.go:123] Gathering logs for container status ...
	I0903 23:45:03.894377  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0903 23:45:06.433897  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:45:06.450322  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:45:06.450386  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:45:06.482782  171911 cri.go:89] found id: ""
	I0903 23:45:06.482810  171911 logs.go:282] 0 containers: []
	W0903 23:45:06.482818  171911 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:45:06.482824  171911 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:45:06.482878  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:45:06.516065  171911 cri.go:89] found id: ""
	I0903 23:45:06.516098  171911 logs.go:282] 0 containers: []
	W0903 23:45:06.516106  171911 logs.go:284] No container was found matching "etcd"
	I0903 23:45:06.516112  171911 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:45:06.516164  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:45:06.548668  171911 cri.go:89] found id: ""
	I0903 23:45:06.548695  171911 logs.go:282] 0 containers: []
	W0903 23:45:06.548703  171911 logs.go:284] No container was found matching "coredns"
	I0903 23:45:06.548710  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:45:06.548765  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:45:06.580287  171911 cri.go:89] found id: ""
	I0903 23:45:06.580316  171911 logs.go:282] 0 containers: []
	W0903 23:45:06.580324  171911 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:45:06.580331  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:45:06.580385  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:45:06.613698  171911 cri.go:89] found id: ""
	I0903 23:45:06.613728  171911 logs.go:282] 0 containers: []
	W0903 23:45:06.613736  171911 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:45:06.613742  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:45:06.613798  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:45:06.648492  171911 cri.go:89] found id: ""
	I0903 23:45:06.648520  171911 logs.go:282] 0 containers: []
	W0903 23:45:06.648531  171911 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:45:06.648539  171911 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:45:06.648591  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:45:06.682079  171911 cri.go:89] found id: ""
	I0903 23:45:06.682105  171911 logs.go:282] 0 containers: []
	W0903 23:45:06.682114  171911 logs.go:284] No container was found matching "kindnet"
	I0903 23:45:06.682123  171911 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0903 23:45:06.682182  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0903 23:45:06.717523  171911 cri.go:89] found id: ""
	I0903 23:45:06.717551  171911 logs.go:282] 0 containers: []
	W0903 23:45:06.717559  171911 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0903 23:45:06.717568  171911 logs.go:123] Gathering logs for kubelet ...
	I0903 23:45:06.717580  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0903 23:45:06.766524  171911 logs.go:123] Gathering logs for dmesg ...
	I0903 23:45:06.766557  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0903 23:45:06.779931  171911 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:45:06.779960  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:45:06.843183  171911 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0903 23:45:06.843204  171911 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:45:06.843217  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0903 23:45:06.919233  171911 logs.go:123] Gathering logs for container status ...
	I0903 23:45:06.919270  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0903 23:45:09.456643  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
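Each cycle opens with `pgrep -xnf kube-apiserver.*minikube.*`, which selects the newest (-n) process whose full command line (-f) exactly matches (-x) the pattern; a non-zero exit simply means no such process exists yet. The equivalent check done by hand against /proc on Linux, purely as an illustrative sketch:

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"regexp"
	"strings"
)

func main() {
	// Same pattern the log passes to pgrep; anchors emulate -x (exact match).
	pat := regexp.MustCompile(`^kube-apiserver.*minikube.*$`)
	entries, err := os.ReadDir("/proc")
	if err != nil {
		fmt.Println(err)
		return
	}
	for _, e := range entries {
		pid := e.Name()
		if !e.IsDir() || strings.TrimLeft(pid, "0123456789") != "" {
			continue // not a numeric PID directory
		}
		raw, err := os.ReadFile(filepath.Join("/proc", pid, "cmdline"))
		if err != nil {
			continue
		}
		// /proc/<pid>/cmdline is NUL-separated; join it the way pgrep -f sees it.
		cmdline := strings.ReplaceAll(strings.TrimRight(string(raw), "\x00"), "\x00", " ")
		if pat.MatchString(cmdline) {
			fmt.Println("found:", pid, cmdline)
		}
	}
}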
	I0903 23:45:09.475777  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:45:09.475855  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:45:09.516030  171911 cri.go:89] found id: ""
	I0903 23:45:09.516066  171911 logs.go:282] 0 containers: []
	W0903 23:45:09.516078  171911 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:45:09.516086  171911 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:45:09.516155  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:45:09.556025  171911 cri.go:89] found id: ""
	I0903 23:45:09.556058  171911 logs.go:282] 0 containers: []
	W0903 23:45:09.556071  171911 logs.go:284] No container was found matching "etcd"
	I0903 23:45:09.556080  171911 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:45:09.556145  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:45:09.596343  171911 cri.go:89] found id: ""
	I0903 23:45:09.596375  171911 logs.go:282] 0 containers: []
	W0903 23:45:09.596384  171911 logs.go:284] No container was found matching "coredns"
	I0903 23:45:09.596393  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:45:09.596456  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:45:09.634286  171911 cri.go:89] found id: ""
	I0903 23:45:09.634323  171911 logs.go:282] 0 containers: []
	W0903 23:45:09.634330  171911 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:45:09.634336  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:45:09.634387  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:45:09.667579  171911 cri.go:89] found id: ""
	I0903 23:45:09.667617  171911 logs.go:282] 0 containers: []
	W0903 23:45:09.667629  171911 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:45:09.667637  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:45:09.667709  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:45:09.702631  171911 cri.go:89] found id: ""
	I0903 23:45:09.702661  171911 logs.go:282] 0 containers: []
	W0903 23:45:09.702670  171911 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:45:09.702677  171911 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:45:09.702744  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:45:09.736481  171911 cri.go:89] found id: ""
	I0903 23:45:09.736513  171911 logs.go:282] 0 containers: []
	W0903 23:45:09.736522  171911 logs.go:284] No container was found matching "kindnet"
	I0903 23:45:09.736528  171911 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0903 23:45:09.736594  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0903 23:45:09.768392  171911 cri.go:89] found id: ""
	I0903 23:45:09.768420  171911 logs.go:282] 0 containers: []
	W0903 23:45:09.768428  171911 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0903 23:45:09.768438  171911 logs.go:123] Gathering logs for container status ...
	I0903 23:45:09.768454  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0903 23:45:09.804233  171911 logs.go:123] Gathering logs for kubelet ...
	I0903 23:45:09.804262  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0903 23:45:09.854916  171911 logs.go:123] Gathering logs for dmesg ...
	I0903 23:45:09.854951  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0903 23:45:09.868290  171911 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:45:09.868326  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:45:09.937659  171911 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0903 23:45:09.937686  171911 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:45:09.937702  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0903 23:45:12.515352  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:45:12.532069  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:45:12.532138  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:45:12.566307  171911 cri.go:89] found id: ""
	I0903 23:45:12.566347  171911 logs.go:282] 0 containers: []
	W0903 23:45:12.566356  171911 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:45:12.566361  171911 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:45:12.566413  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:45:12.600883  171911 cri.go:89] found id: ""
	I0903 23:45:12.600911  171911 logs.go:282] 0 containers: []
	W0903 23:45:12.600919  171911 logs.go:284] No container was found matching "etcd"
	I0903 23:45:12.600925  171911 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:45:12.600976  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:45:12.634831  171911 cri.go:89] found id: ""
	I0903 23:45:12.634860  171911 logs.go:282] 0 containers: []
	W0903 23:45:12.634868  171911 logs.go:284] No container was found matching "coredns"
	I0903 23:45:12.634874  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:45:12.634932  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:45:12.668965  171911 cri.go:89] found id: ""
	I0903 23:45:12.668993  171911 logs.go:282] 0 containers: []
	W0903 23:45:12.669002  171911 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:45:12.669008  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:45:12.669061  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:45:12.702632  171911 cri.go:89] found id: ""
	I0903 23:45:12.702662  171911 logs.go:282] 0 containers: []
	W0903 23:45:12.702670  171911 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:45:12.702676  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:45:12.702734  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:45:12.736957  171911 cri.go:89] found id: ""
	I0903 23:45:12.736994  171911 logs.go:282] 0 containers: []
	W0903 23:45:12.737005  171911 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:45:12.737013  171911 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:45:12.737096  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:45:12.769324  171911 cri.go:89] found id: ""
	I0903 23:45:12.769353  171911 logs.go:282] 0 containers: []
	W0903 23:45:12.769361  171911 logs.go:284] No container was found matching "kindnet"
	I0903 23:45:12.769367  171911 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0903 23:45:12.769433  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0903 23:45:12.801706  171911 cri.go:89] found id: ""
	I0903 23:45:12.801731  171911 logs.go:282] 0 containers: []
	W0903 23:45:12.801738  171911 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0903 23:45:12.801747  171911 logs.go:123] Gathering logs for kubelet ...
	I0903 23:45:12.801758  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0903 23:45:12.850449  171911 logs.go:123] Gathering logs for dmesg ...
	I0903 23:45:12.850485  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0903 23:45:12.864235  171911 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:45:12.864263  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:45:12.928347  171911 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0903 23:45:12.928372  171911 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:45:12.928385  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0903 23:45:13.002530  171911 logs.go:123] Gathering logs for container status ...
	I0903 23:45:13.002569  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0903 23:45:15.541753  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:45:15.558031  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:45:15.558098  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:45:15.590544  171911 cri.go:89] found id: ""
	I0903 23:45:15.590590  171911 logs.go:282] 0 containers: []
	W0903 23:45:15.590608  171911 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:45:15.590618  171911 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:45:15.590681  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:45:15.623172  171911 cri.go:89] found id: ""
	I0903 23:45:15.623206  171911 logs.go:282] 0 containers: []
	W0903 23:45:15.623214  171911 logs.go:284] No container was found matching "etcd"
	I0903 23:45:15.623220  171911 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:45:15.623271  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:45:15.666374  171911 cri.go:89] found id: ""
	I0903 23:45:15.666413  171911 logs.go:282] 0 containers: []
	W0903 23:45:15.666424  171911 logs.go:284] No container was found matching "coredns"
	I0903 23:45:15.666432  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:45:15.666500  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:45:15.700153  171911 cri.go:89] found id: ""
	I0903 23:45:15.700188  171911 logs.go:282] 0 containers: []
	W0903 23:45:15.700196  171911 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:45:15.700203  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:45:15.700258  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:45:15.734346  171911 cri.go:89] found id: ""
	I0903 23:45:15.734379  171911 logs.go:282] 0 containers: []
	W0903 23:45:15.734391  171911 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:45:15.734401  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:45:15.734468  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:45:15.768125  171911 cri.go:89] found id: ""
	I0903 23:45:15.768151  171911 logs.go:282] 0 containers: []
	W0903 23:45:15.768160  171911 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:45:15.768166  171911 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:45:15.768219  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:45:15.802055  171911 cri.go:89] found id: ""
	I0903 23:45:15.802085  171911 logs.go:282] 0 containers: []
	W0903 23:45:15.802093  171911 logs.go:284] No container was found matching "kindnet"
	I0903 23:45:15.802101  171911 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0903 23:45:15.802155  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0903 23:45:15.835742  171911 cri.go:89] found id: ""
	I0903 23:45:15.835775  171911 logs.go:282] 0 containers: []
	W0903 23:45:15.835785  171911 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0903 23:45:15.835796  171911 logs.go:123] Gathering logs for kubelet ...
	I0903 23:45:15.835809  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0903 23:45:15.887302  171911 logs.go:123] Gathering logs for dmesg ...
	I0903 23:45:15.887339  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0903 23:45:15.900589  171911 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:45:15.900616  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:45:15.963821  171911 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0903 23:45:15.963850  171911 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:45:15.963867  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0903 23:45:16.041873  171911 logs.go:123] Gathering logs for container status ...
	I0903 23:45:16.041910  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
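Taken together, this stretch of the log is one probe loop: roughly every three seconds the runner re-checks for a kube-apiserver process and containers, and on each miss re-gathers kubelet, dmesg, describe-nodes, CRI-O, and container-status output before trying again. A deadline-bounded version of that cadence, sketched with hypothetical probe and gather callbacks rather than minikube's real ones:

package main

import (
	"errors"
	"fmt"
	"time"
)

// waitFor polls probe until it succeeds or timeout elapses, calling
// gather after each failed attempt, echoing the ~3s cadence in the log.
func waitFor(probe func() error, gather func(), interval, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if err := probe(); err == nil {
			return nil
		}
		gather()
		if time.Now().After(deadline) {
			return errors.New("timed out waiting for probe")
		}
		time.Sleep(interval)
	}
}

func main() {
	attempt := 0
	err := waitFor(
		func() error { attempt++; return errors.New("apiserver not up") }, // hypothetical probe
		func() { fmt.Println("gathering logs, attempt", attempt) },        // hypothetical gather
		3*time.Second, 10*time.Second,
	)
	fmt.Println(err)
}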
	I0903 23:45:18.579975  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:45:18.596552  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:45:18.596644  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:45:18.637122  171911 cri.go:89] found id: ""
	I0903 23:45:18.637150  171911 logs.go:282] 0 containers: []
	W0903 23:45:18.637159  171911 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:45:18.637168  171911 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:45:18.637231  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:45:18.683926  171911 cri.go:89] found id: ""
	I0903 23:45:18.683965  171911 logs.go:282] 0 containers: []
	W0903 23:45:18.683976  171911 logs.go:284] No container was found matching "etcd"
	I0903 23:45:18.683984  171911 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:45:18.684143  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:45:18.724297  171911 cri.go:89] found id: ""
	I0903 23:45:18.724326  171911 logs.go:282] 0 containers: []
	W0903 23:45:18.724337  171911 logs.go:284] No container was found matching "coredns"
	I0903 23:45:18.724356  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:45:18.724424  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:45:18.767543  171911 cri.go:89] found id: ""
	I0903 23:45:18.767585  171911 logs.go:282] 0 containers: []
	W0903 23:45:18.767594  171911 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:45:18.767601  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:45:18.767666  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:45:18.808984  171911 cri.go:89] found id: ""
	I0903 23:45:18.809023  171911 logs.go:282] 0 containers: []
	W0903 23:45:18.809034  171911 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:45:18.809042  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:45:18.809125  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:45:18.843616  171911 cri.go:89] found id: ""
	I0903 23:45:18.843651  171911 logs.go:282] 0 containers: []
	W0903 23:45:18.843662  171911 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:45:18.843670  171911 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:45:18.843772  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:45:18.878089  171911 cri.go:89] found id: ""
	I0903 23:45:18.878117  171911 logs.go:282] 0 containers: []
	W0903 23:45:18.878125  171911 logs.go:284] No container was found matching "kindnet"
	I0903 23:45:18.878131  171911 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0903 23:45:18.878199  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0903 23:45:18.913557  171911 cri.go:89] found id: ""
	I0903 23:45:18.913590  171911 logs.go:282] 0 containers: []
	W0903 23:45:18.913602  171911 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0903 23:45:18.913613  171911 logs.go:123] Gathering logs for kubelet ...
	I0903 23:45:18.913629  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0903 23:45:18.964473  171911 logs.go:123] Gathering logs for dmesg ...
	I0903 23:45:18.964511  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0903 23:45:18.977841  171911 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:45:18.977868  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:45:19.041151  171911 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0903 23:45:19.041175  171911 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:45:19.041190  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0903 23:45:19.114112  171911 logs.go:123] Gathering logs for container status ...
	I0903 23:45:19.114166  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
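
The `cri.go:54` / `cri.go:89` pairs above all follow one pattern: run `crictl ps -a --quiet --name=<name>` on the node and split the output into container IDs, where an empty result produces the `found id: ""` / `0 containers: []` lines. A minimal Go sketch of that lookup follows; `listCRIContainers` is a hypothetical stand-in, not minikube's actual API, and it runs `crictl` locally rather than over SSH.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listCRIContainers returns the IDs of all containers (any state) whose
// name matches `name`. An empty slice corresponds to the log's
// `found id: ""` followed by `0 containers: []`.
func listCRIContainers(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, fmt.Errorf("crictl ps: %w", err)
	}
	var ids []string
	for _, id := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if id != "" {
			ids = append(ids, id)
		}
	}
	return ids, nil
}

func main() {
	// The same component names the log cycles through.
	for _, name := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
		ids, err := listCRIContainers(name)
		if err != nil || len(ids) == 0 {
			fmt.Printf("No container was found matching %q\n", name)
			continue
		}
		fmt.Printf("found %d container(s) for %q: %v\n", len(ids), name, ids)
	}
}
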
	I0903 23:45:21.655099  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:45:21.671751  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:45:21.671826  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:45:21.705950  171911 cri.go:89] found id: ""
	I0903 23:45:21.705985  171911 logs.go:282] 0 containers: []
	W0903 23:45:21.705993  171911 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:45:21.706000  171911 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:45:21.706066  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:45:21.745098  171911 cri.go:89] found id: ""
	I0903 23:45:21.745125  171911 logs.go:282] 0 containers: []
	W0903 23:45:21.745134  171911 logs.go:284] No container was found matching "etcd"
	I0903 23:45:21.745139  171911 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:45:21.745212  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:45:21.787214  171911 cri.go:89] found id: ""
	I0903 23:45:21.787246  171911 logs.go:282] 0 containers: []
	W0903 23:45:21.787259  171911 logs.go:284] No container was found matching "coredns"
	I0903 23:45:21.787267  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:45:21.787340  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:45:21.825966  171911 cri.go:89] found id: ""
	I0903 23:45:21.825999  171911 logs.go:282] 0 containers: []
	W0903 23:45:21.826009  171911 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:45:21.826023  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:45:21.826094  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:45:21.858874  171911 cri.go:89] found id: ""
	I0903 23:45:21.858909  171911 logs.go:282] 0 containers: []
	W0903 23:45:21.858920  171911 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:45:21.858928  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:45:21.858990  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:45:21.892820  171911 cri.go:89] found id: ""
	I0903 23:45:21.892851  171911 logs.go:282] 0 containers: []
	W0903 23:45:21.892862  171911 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:45:21.892869  171911 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:45:21.892938  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:45:21.927139  171911 cri.go:89] found id: ""
	I0903 23:45:21.927167  171911 logs.go:282] 0 containers: []
	W0903 23:45:21.927174  171911 logs.go:284] No container was found matching "kindnet"
	I0903 23:45:21.927180  171911 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0903 23:45:21.927242  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0903 23:45:21.961202  171911 cri.go:89] found id: ""
	I0903 23:45:21.961235  171911 logs.go:282] 0 containers: []
	W0903 23:45:21.961247  171911 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0903 23:45:21.961259  171911 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:45:21.961274  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0903 23:45:22.034253  171911 logs.go:123] Gathering logs for container status ...
	I0903 23:45:22.034307  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0903 23:45:22.081973  171911 logs.go:123] Gathering logs for kubelet ...
	I0903 23:45:22.082014  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0903 23:45:22.136441  171911 logs.go:123] Gathering logs for dmesg ...
	I0903 23:45:22.136507  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0903 23:45:22.153988  171911 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:45:22.154027  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:45:22.218718  171911 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
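
The `sudo pgrep -xnf kube-apiserver.*minikube.*` probes above recur roughly every three seconds (23:45:18, :21, :24, ...), which is the signature of a poll-until-healthy wait: as long as no apiserver process exists, each probe triggers another full container-listing and log-gathering cycle. A minimal sketch of that retry loop, with hypothetical names (`waitForAPIServerProcess` is not minikube's actual function), assuming the ~3s cadence observed in the log:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// apiServerRunning reports whether a kube-apiserver process exists.
// pgrep exits non-zero when nothing matches the pattern.
func apiServerRunning() bool {
	return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
}

// waitForAPIServerProcess polls until the process appears or the
// timeout elapses, mirroring the repeated probes in the log.
func waitForAPIServerProcess(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if apiServerRunning() {
			return nil
		}
		time.Sleep(3 * time.Second) // matches the ~3s gap between probes above
	}
	return fmt.Errorf("kube-apiserver process never appeared within %s", timeout)
}

func main() {
	if err := waitForAPIServerProcess(2 * time.Minute); err != nil {
		fmt.Println(err)
	}
}
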
	I0903 23:45:24.718932  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:45:24.735304  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:45:24.735366  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:45:24.769484  171911 cri.go:89] found id: ""
	I0903 23:45:24.769526  171911 logs.go:282] 0 containers: []
	W0903 23:45:24.769534  171911 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:45:24.769541  171911 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:45:24.769602  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:45:24.804478  171911 cri.go:89] found id: ""
	I0903 23:45:24.804512  171911 logs.go:282] 0 containers: []
	W0903 23:45:24.804523  171911 logs.go:284] No container was found matching "etcd"
	I0903 23:45:24.804531  171911 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:45:24.804616  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:45:24.839941  171911 cri.go:89] found id: ""
	I0903 23:45:24.839967  171911 logs.go:282] 0 containers: []
	W0903 23:45:24.839974  171911 logs.go:284] No container was found matching "coredns"
	I0903 23:45:24.839980  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:45:24.840043  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:45:24.872589  171911 cri.go:89] found id: ""
	I0903 23:45:24.872631  171911 logs.go:282] 0 containers: []
	W0903 23:45:24.872641  171911 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:45:24.872650  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:45:24.872713  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:45:24.906281  171911 cri.go:89] found id: ""
	I0903 23:45:24.906312  171911 logs.go:282] 0 containers: []
	W0903 23:45:24.906321  171911 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:45:24.906327  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:45:24.906381  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:45:24.940855  171911 cri.go:89] found id: ""
	I0903 23:45:24.940891  171911 logs.go:282] 0 containers: []
	W0903 23:45:24.940902  171911 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:45:24.940910  171911 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:45:24.940979  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:45:24.973046  171911 cri.go:89] found id: ""
	I0903 23:45:24.973075  171911 logs.go:282] 0 containers: []
	W0903 23:45:24.973084  171911 logs.go:284] No container was found matching "kindnet"
	I0903 23:45:24.973091  171911 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0903 23:45:24.973160  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0903 23:45:25.006986  171911 cri.go:89] found id: ""
	I0903 23:45:25.007015  171911 logs.go:282] 0 containers: []
	W0903 23:45:25.007026  171911 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0903 23:45:25.007038  171911 logs.go:123] Gathering logs for kubelet ...
	I0903 23:45:25.007054  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0903 23:45:25.057037  171911 logs.go:123] Gathering logs for dmesg ...
	I0903 23:45:25.057075  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0903 23:45:25.070713  171911 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:45:25.070741  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:45:25.135104  171911 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0903 23:45:25.135129  171911 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:45:25.135142  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0903 23:45:25.211776  171911 logs.go:123] Gathering logs for container status ...
	I0903 23:45:25.211816  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0903 23:45:27.750263  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:45:27.766962  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:45:27.767039  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:45:27.809102  171911 cri.go:89] found id: ""
	I0903 23:45:27.809134  171911 logs.go:282] 0 containers: []
	W0903 23:45:27.809142  171911 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:45:27.809149  171911 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:45:27.809201  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:45:27.852918  171911 cri.go:89] found id: ""
	I0903 23:45:27.852946  171911 logs.go:282] 0 containers: []
	W0903 23:45:27.852954  171911 logs.go:284] No container was found matching "etcd"
	I0903 23:45:27.852961  171911 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:45:27.853025  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:45:27.908523  171911 cri.go:89] found id: ""
	I0903 23:45:27.908554  171911 logs.go:282] 0 containers: []
	W0903 23:45:27.908561  171911 logs.go:284] No container was found matching "coredns"
	I0903 23:45:27.908566  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:45:27.908627  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:45:27.941105  171911 cri.go:89] found id: ""
	I0903 23:45:27.941136  171911 logs.go:282] 0 containers: []
	W0903 23:45:27.941144  171911 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:45:27.941150  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:45:27.941204  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:45:27.974030  171911 cri.go:89] found id: ""
	I0903 23:45:27.974064  171911 logs.go:282] 0 containers: []
	W0903 23:45:27.974075  171911 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:45:27.974082  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:45:27.974149  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:45:28.007829  171911 cri.go:89] found id: ""
	I0903 23:45:28.007857  171911 logs.go:282] 0 containers: []
	W0903 23:45:28.007867  171911 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:45:28.007874  171911 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:45:28.007936  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:45:28.050575  171911 cri.go:89] found id: ""
	I0903 23:45:28.050614  171911 logs.go:282] 0 containers: []
	W0903 23:45:28.050622  171911 logs.go:284] No container was found matching "kindnet"
	I0903 23:45:28.050629  171911 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0903 23:45:28.050684  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0903 23:45:28.085777  171911 cri.go:89] found id: ""
	I0903 23:45:28.085809  171911 logs.go:282] 0 containers: []
	W0903 23:45:28.085817  171911 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0903 23:45:28.085826  171911 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:45:28.085838  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:45:28.150751  171911 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0903 23:45:28.150778  171911 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:45:28.150792  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0903 23:45:28.223955  171911 logs.go:123] Gathering logs for container status ...
	I0903 23:45:28.224000  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0903 23:45:28.262972  171911 logs.go:123] Gathering logs for kubelet ...
	I0903 23:45:28.262999  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0903 23:45:28.311545  171911 logs.go:123] Gathering logs for dmesg ...
	I0903 23:45:28.311580  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
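
Each "Gathering logs for ..." group above fans out to one shell command per log source; the command strings below are copied from the log itself, while `gatherLogs` is a hypothetical stand-in for minikube's collector. Note how a dead apiserver makes only the "describe nodes" source fail, with the "connection to the server localhost:8443 was refused" error seen repeatedly above.

package main

import (
	"fmt"
	"os/exec"
)

// logSources maps each log source name to the node-side command the
// log shows minikube running for it.
var logSources = map[string]string{
	"kubelet":          "sudo journalctl -u kubelet -n 400",
	"dmesg":            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
	"describe nodes":   "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig",
	"CRI-O":            "sudo journalctl -u crio -n 400",
	"container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
}

// gatherLogs runs every source's command and reports failures without
// aborting, matching the warn-and-continue behavior in the log.
func gatherLogs() {
	for name, cmd := range logSources {
		fmt.Printf("Gathering logs for %s ...\n", name)
		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		if err != nil {
			fmt.Printf("failed %s: %v\n%s\n", name, err, out)
			continue
		}
		fmt.Printf("%s\n", out)
	}
}

func main() { gatherLogs() }
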
	I0903 23:45:30.827970  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:45:30.844742  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:45:30.844805  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:45:30.880412  171911 cri.go:89] found id: ""
	I0903 23:45:30.880453  171911 logs.go:282] 0 containers: []
	W0903 23:45:30.880468  171911 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:45:30.880476  171911 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:45:30.880549  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:45:30.913830  171911 cri.go:89] found id: ""
	I0903 23:45:30.913858  171911 logs.go:282] 0 containers: []
	W0903 23:45:30.913867  171911 logs.go:284] No container was found matching "etcd"
	I0903 23:45:30.913872  171911 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:45:30.913935  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:45:30.946611  171911 cri.go:89] found id: ""
	I0903 23:45:30.946641  171911 logs.go:282] 0 containers: []
	W0903 23:45:30.946650  171911 logs.go:284] No container was found matching "coredns"
	I0903 23:45:30.946656  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:45:30.946711  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:45:30.980152  171911 cri.go:89] found id: ""
	I0903 23:45:30.980183  171911 logs.go:282] 0 containers: []
	W0903 23:45:30.980193  171911 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:45:30.980201  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:45:30.980271  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:45:31.015814  171911 cri.go:89] found id: ""
	I0903 23:45:31.015845  171911 logs.go:282] 0 containers: []
	W0903 23:45:31.015856  171911 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:45:31.015863  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:45:31.015932  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:45:31.050513  171911 cri.go:89] found id: ""
	I0903 23:45:31.050543  171911 logs.go:282] 0 containers: []
	W0903 23:45:31.050555  171911 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:45:31.050562  171911 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:45:31.050636  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:45:31.083766  171911 cri.go:89] found id: ""
	I0903 23:45:31.083791  171911 logs.go:282] 0 containers: []
	W0903 23:45:31.083798  171911 logs.go:284] No container was found matching "kindnet"
	I0903 23:45:31.083805  171911 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0903 23:45:31.083864  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0903 23:45:31.117858  171911 cri.go:89] found id: ""
	I0903 23:45:31.117886  171911 logs.go:282] 0 containers: []
	W0903 23:45:31.117893  171911 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0903 23:45:31.117903  171911 logs.go:123] Gathering logs for dmesg ...
	I0903 23:45:31.117922  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0903 23:45:31.131404  171911 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:45:31.131433  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:45:31.195245  171911 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0903 23:45:31.195275  171911 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:45:31.195295  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0903 23:45:31.271630  171911 logs.go:123] Gathering logs for container status ...
	I0903 23:45:31.271671  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0903 23:45:31.310746  171911 logs.go:123] Gathering logs for kubelet ...
	I0903 23:45:31.310780  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0903 23:45:33.861848  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:45:33.878672  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:45:33.878742  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:45:33.911344  171911 cri.go:89] found id: ""
	I0903 23:45:33.911377  171911 logs.go:282] 0 containers: []
	W0903 23:45:33.911388  171911 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:45:33.911396  171911 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:45:33.911458  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:45:33.948348  171911 cri.go:89] found id: ""
	I0903 23:45:33.948377  171911 logs.go:282] 0 containers: []
	W0903 23:45:33.948385  171911 logs.go:284] No container was found matching "etcd"
	I0903 23:45:33.948391  171911 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:45:33.948455  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:45:33.981680  171911 cri.go:89] found id: ""
	I0903 23:45:33.981710  171911 logs.go:282] 0 containers: []
	W0903 23:45:33.981722  171911 logs.go:284] No container was found matching "coredns"
	I0903 23:45:33.981730  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:45:33.981796  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:45:34.013721  171911 cri.go:89] found id: ""
	I0903 23:45:34.013747  171911 logs.go:282] 0 containers: []
	W0903 23:45:34.013755  171911 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:45:34.013762  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:45:34.013827  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:45:34.047612  171911 cri.go:89] found id: ""
	I0903 23:45:34.047644  171911 logs.go:282] 0 containers: []
	W0903 23:45:34.047654  171911 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:45:34.047661  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:45:34.047720  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:45:34.081680  171911 cri.go:89] found id: ""
	I0903 23:45:34.081714  171911 logs.go:282] 0 containers: []
	W0903 23:45:34.081725  171911 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:45:34.081734  171911 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:45:34.081802  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:45:34.117208  171911 cri.go:89] found id: ""
	I0903 23:45:34.117247  171911 logs.go:282] 0 containers: []
	W0903 23:45:34.117258  171911 logs.go:284] No container was found matching "kindnet"
	I0903 23:45:34.117268  171911 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0903 23:45:34.117339  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0903 23:45:34.150598  171911 cri.go:89] found id: ""
	I0903 23:45:34.150626  171911 logs.go:282] 0 containers: []
	W0903 23:45:34.150634  171911 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0903 23:45:34.150644  171911 logs.go:123] Gathering logs for kubelet ...
	I0903 23:45:34.150655  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0903 23:45:34.199612  171911 logs.go:123] Gathering logs for dmesg ...
	I0903 23:45:34.199652  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0903 23:45:34.213484  171911 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:45:34.213513  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:45:34.276337  171911 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0903 23:45:34.276358  171911 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:45:34.276380  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0903 23:45:34.347780  171911 logs.go:123] Gathering logs for container status ...
	I0903 23:45:34.347822  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0903 23:45:36.885583  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:45:36.902360  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:45:36.902439  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:45:36.936103  171911 cri.go:89] found id: ""
	I0903 23:45:36.936133  171911 logs.go:282] 0 containers: []
	W0903 23:45:36.936142  171911 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:45:36.936148  171911 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:45:36.936212  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:45:36.969146  171911 cri.go:89] found id: ""
	I0903 23:45:36.969173  171911 logs.go:282] 0 containers: []
	W0903 23:45:36.969180  171911 logs.go:284] No container was found matching "etcd"
	I0903 23:45:36.969186  171911 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:45:36.969248  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:45:37.002284  171911 cri.go:89] found id: ""
	I0903 23:45:37.002314  171911 logs.go:282] 0 containers: []
	W0903 23:45:37.002324  171911 logs.go:284] No container was found matching "coredns"
	I0903 23:45:37.002331  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:45:37.002385  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:45:37.034701  171911 cri.go:89] found id: ""
	I0903 23:45:37.034731  171911 logs.go:282] 0 containers: []
	W0903 23:45:37.034741  171911 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:45:37.034749  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:45:37.034815  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:45:37.067766  171911 cri.go:89] found id: ""
	I0903 23:45:37.067798  171911 logs.go:282] 0 containers: []
	W0903 23:45:37.067810  171911 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:45:37.067819  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:45:37.067887  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:45:37.100402  171911 cri.go:89] found id: ""
	I0903 23:45:37.100431  171911 logs.go:282] 0 containers: []
	W0903 23:45:37.100439  171911 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:45:37.100445  171911 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:45:37.100495  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:45:37.134783  171911 cri.go:89] found id: ""
	I0903 23:45:37.134814  171911 logs.go:282] 0 containers: []
	W0903 23:45:37.134822  171911 logs.go:284] No container was found matching "kindnet"
	I0903 23:45:37.134828  171911 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0903 23:45:37.134892  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0903 23:45:37.168715  171911 cri.go:89] found id: ""
	I0903 23:45:37.168746  171911 logs.go:282] 0 containers: []
	W0903 23:45:37.168753  171911 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0903 23:45:37.168768  171911 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:45:37.168781  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0903 23:45:37.239216  171911 logs.go:123] Gathering logs for container status ...
	I0903 23:45:37.239259  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0903 23:45:37.278941  171911 logs.go:123] Gathering logs for kubelet ...
	I0903 23:45:37.278977  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0903 23:45:37.327168  171911 logs.go:123] Gathering logs for dmesg ...
	I0903 23:45:37.327207  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0903 23:45:37.340806  171911 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:45:37.340837  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:45:37.402460  171911 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0903 23:45:39.902717  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:45:39.919140  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:45:39.919211  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:45:39.952379  171911 cri.go:89] found id: ""
	I0903 23:45:39.952407  171911 logs.go:282] 0 containers: []
	W0903 23:45:39.952421  171911 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:45:39.952428  171911 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:45:39.952510  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:45:39.986646  171911 cri.go:89] found id: ""
	I0903 23:45:39.986674  171911 logs.go:282] 0 containers: []
	W0903 23:45:39.986682  171911 logs.go:284] No container was found matching "etcd"
	I0903 23:45:39.986688  171911 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:45:39.986750  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:45:40.019946  171911 cri.go:89] found id: ""
	I0903 23:45:40.019984  171911 logs.go:282] 0 containers: []
	W0903 23:45:40.019995  171911 logs.go:284] No container was found matching "coredns"
	I0903 23:45:40.020004  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:45:40.020075  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:45:40.051084  171911 cri.go:89] found id: ""
	I0903 23:45:40.051120  171911 logs.go:282] 0 containers: []
	W0903 23:45:40.051131  171911 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:45:40.051139  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:45:40.051198  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:45:40.084431  171911 cri.go:89] found id: ""
	I0903 23:45:40.084471  171911 logs.go:282] 0 containers: []
	W0903 23:45:40.084485  171911 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:45:40.084493  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:45:40.084590  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:45:40.117261  171911 cri.go:89] found id: ""
	I0903 23:45:40.117289  171911 logs.go:282] 0 containers: []
	W0903 23:45:40.117298  171911 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:45:40.117305  171911 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:45:40.117356  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:45:40.149940  171911 cri.go:89] found id: ""
	I0903 23:45:40.149976  171911 logs.go:282] 0 containers: []
	W0903 23:45:40.149983  171911 logs.go:284] No container was found matching "kindnet"
	I0903 23:45:40.149989  171911 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0903 23:45:40.150049  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0903 23:45:40.185787  171911 cri.go:89] found id: ""
	I0903 23:45:40.185819  171911 logs.go:282] 0 containers: []
	W0903 23:45:40.185828  171911 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0903 23:45:40.185838  171911 logs.go:123] Gathering logs for kubelet ...
	I0903 23:45:40.185849  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0903 23:45:40.236114  171911 logs.go:123] Gathering logs for dmesg ...
	I0903 23:45:40.236151  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0903 23:45:40.249810  171911 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:45:40.249842  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:45:40.315354  171911 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0903 23:45:40.315385  171911 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:45:40.315402  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0903 23:45:40.391973  171911 logs.go:123] Gathering logs for container status ...
	I0903 23:45:40.392014  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0903 23:45:42.929523  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:45:42.946789  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:45:42.946852  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:45:42.981168  171911 cri.go:89] found id: ""
	I0903 23:45:42.981202  171911 logs.go:282] 0 containers: []
	W0903 23:45:42.981214  171911 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:45:42.981223  171911 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:45:42.981290  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:45:43.016160  171911 cri.go:89] found id: ""
	I0903 23:45:43.016191  171911 logs.go:282] 0 containers: []
	W0903 23:45:43.016202  171911 logs.go:284] No container was found matching "etcd"
	I0903 23:45:43.016210  171911 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:45:43.016277  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:45:43.052374  171911 cri.go:89] found id: ""
	I0903 23:45:43.052407  171911 logs.go:282] 0 containers: []
	W0903 23:45:43.052415  171911 logs.go:284] No container was found matching "coredns"
	I0903 23:45:43.052421  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:45:43.052490  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:45:43.087466  171911 cri.go:89] found id: ""
	I0903 23:45:43.087492  171911 logs.go:282] 0 containers: []
	W0903 23:45:43.087499  171911 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:45:43.087506  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:45:43.087578  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:45:43.121733  171911 cri.go:89] found id: ""
	I0903 23:45:43.121770  171911 logs.go:282] 0 containers: []
	W0903 23:45:43.121780  171911 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:45:43.121786  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:45:43.121852  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:45:43.155089  171911 cri.go:89] found id: ""
	I0903 23:45:43.155120  171911 logs.go:282] 0 containers: []
	W0903 23:45:43.155129  171911 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:45:43.155136  171911 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:45:43.155208  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:45:43.187081  171911 cri.go:89] found id: ""
	I0903 23:45:43.187113  171911 logs.go:282] 0 containers: []
	W0903 23:45:43.187124  171911 logs.go:284] No container was found matching "kindnet"
	I0903 23:45:43.187132  171911 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0903 23:45:43.187206  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0903 23:45:43.221988  171911 cri.go:89] found id: ""
	I0903 23:45:43.222020  171911 logs.go:282] 0 containers: []
	W0903 23:45:43.222027  171911 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0903 23:45:43.222037  171911 logs.go:123] Gathering logs for kubelet ...
	I0903 23:45:43.222048  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0903 23:45:43.274015  171911 logs.go:123] Gathering logs for dmesg ...
	I0903 23:45:43.274053  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0903 23:45:43.288204  171911 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:45:43.288237  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:45:43.352172  171911 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0903 23:45:43.352197  171911 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:45:43.352214  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0903 23:45:43.429363  171911 logs.go:123] Gathering logs for container status ...
	I0903 23:45:43.429416  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0903 23:45:45.967138  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:45:45.984430  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:45:45.984508  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:45:46.018620  171911 cri.go:89] found id: ""
	I0903 23:45:46.018656  171911 logs.go:282] 0 containers: []
	W0903 23:45:46.018670  171911 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:45:46.018680  171911 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:45:46.018736  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:45:46.052857  171911 cri.go:89] found id: ""
	I0903 23:45:46.052896  171911 logs.go:282] 0 containers: []
	W0903 23:45:46.052908  171911 logs.go:284] No container was found matching "etcd"
	I0903 23:45:46.052917  171911 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:45:46.052992  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:45:46.086760  171911 cri.go:89] found id: ""
	I0903 23:45:46.086802  171911 logs.go:282] 0 containers: []
	W0903 23:45:46.086815  171911 logs.go:284] No container was found matching "coredns"
	I0903 23:45:46.086824  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:45:46.086897  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:45:46.122770  171911 cri.go:89] found id: ""
	I0903 23:45:46.122808  171911 logs.go:282] 0 containers: []
	W0903 23:45:46.122821  171911 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:45:46.122831  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:45:46.122898  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:45:46.156632  171911 cri.go:89] found id: ""
	I0903 23:45:46.156666  171911 logs.go:282] 0 containers: []
	W0903 23:45:46.156677  171911 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:45:46.156684  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:45:46.156748  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:45:46.189167  171911 cri.go:89] found id: ""
	I0903 23:45:46.189196  171911 logs.go:282] 0 containers: []
	W0903 23:45:46.189204  171911 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:45:46.189211  171911 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:45:46.189281  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:45:46.221676  171911 cri.go:89] found id: ""
	I0903 23:45:46.221703  171911 logs.go:282] 0 containers: []
	W0903 23:45:46.221710  171911 logs.go:284] No container was found matching "kindnet"
	I0903 23:45:46.221716  171911 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0903 23:45:46.221781  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0903 23:45:46.255950  171911 cri.go:89] found id: ""
	I0903 23:45:46.255989  171911 logs.go:282] 0 containers: []
	W0903 23:45:46.256001  171911 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0903 23:45:46.256012  171911 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:45:46.256026  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:45:46.320856  171911 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0903 23:45:46.320887  171911 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:45:46.320904  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0903 23:45:46.395448  171911 logs.go:123] Gathering logs for container status ...
	I0903 23:45:46.395495  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0903 23:45:46.433348  171911 logs.go:123] Gathering logs for kubelet ...
	I0903 23:45:46.433402  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0903 23:45:46.483558  171911 logs.go:123] Gathering logs for dmesg ...
	I0903 23:45:46.483600  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0903 23:45:48.997604  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:45:49.014515  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:45:49.014584  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:45:49.049009  171911 cri.go:89] found id: ""
	I0903 23:45:49.049041  171911 logs.go:282] 0 containers: []
	W0903 23:45:49.049049  171911 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:45:49.049055  171911 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:45:49.049107  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:45:49.082752  171911 cri.go:89] found id: ""
	I0903 23:45:49.082784  171911 logs.go:282] 0 containers: []
	W0903 23:45:49.082792  171911 logs.go:284] No container was found matching "etcd"
	I0903 23:45:49.082799  171911 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:45:49.082853  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:45:49.117820  171911 cri.go:89] found id: ""
	I0903 23:45:49.117851  171911 logs.go:282] 0 containers: []
	W0903 23:45:49.117861  171911 logs.go:284] No container was found matching "coredns"
	I0903 23:45:49.117869  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:45:49.117937  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:45:49.152630  171911 cri.go:89] found id: ""
	I0903 23:45:49.152662  171911 logs.go:282] 0 containers: []
	W0903 23:45:49.152673  171911 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:45:49.152681  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:45:49.152746  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:45:49.186660  171911 cri.go:89] found id: ""
	I0903 23:45:49.186693  171911 logs.go:282] 0 containers: []
	W0903 23:45:49.186705  171911 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:45:49.186715  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:45:49.186787  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:45:49.221850  171911 cri.go:89] found id: ""
	I0903 23:45:49.221879  171911 logs.go:282] 0 containers: []
	W0903 23:45:49.221887  171911 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:45:49.221894  171911 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:45:49.221947  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:45:49.256272  171911 cri.go:89] found id: ""
	I0903 23:45:49.256301  171911 logs.go:282] 0 containers: []
	W0903 23:45:49.256309  171911 logs.go:284] No container was found matching "kindnet"
	I0903 23:45:49.256315  171911 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0903 23:45:49.256378  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0903 23:45:49.292385  171911 cri.go:89] found id: ""
	I0903 23:45:49.292414  171911 logs.go:282] 0 containers: []
	W0903 23:45:49.292422  171911 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0903 23:45:49.292432  171911 logs.go:123] Gathering logs for kubelet ...
	I0903 23:45:49.292446  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0903 23:45:49.343070  171911 logs.go:123] Gathering logs for dmesg ...
	I0903 23:45:49.343109  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0903 23:45:49.356910  171911 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:45:49.356940  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:45:49.423437  171911 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0903 23:45:49.423471  171911 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:45:49.423486  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0903 23:45:49.494062  171911 logs.go:123] Gathering logs for container status ...
	I0903 23:45:49.494108  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0903 23:45:52.034573  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:45:52.051154  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:45:52.051217  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:45:52.088178  171911 cri.go:89] found id: ""
	I0903 23:45:52.088205  171911 logs.go:282] 0 containers: []
	W0903 23:45:52.088214  171911 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:45:52.088222  171911 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:45:52.088284  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:45:52.122560  171911 cri.go:89] found id: ""
	I0903 23:45:52.122595  171911 logs.go:282] 0 containers: []
	W0903 23:45:52.122606  171911 logs.go:284] No container was found matching "etcd"
	I0903 23:45:52.122617  171911 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:45:52.122687  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:45:52.154593  171911 cri.go:89] found id: ""
	I0903 23:45:52.154628  171911 logs.go:282] 0 containers: []
	W0903 23:45:52.154636  171911 logs.go:284] No container was found matching "coredns"
	I0903 23:45:52.154646  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:45:52.154700  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:45:52.188028  171911 cri.go:89] found id: ""
	I0903 23:45:52.188066  171911 logs.go:282] 0 containers: []
	W0903 23:45:52.188079  171911 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:45:52.188088  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:45:52.188162  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:45:52.223140  171911 cri.go:89] found id: ""
	I0903 23:45:52.223165  171911 logs.go:282] 0 containers: []
	W0903 23:45:52.223172  171911 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:45:52.223178  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:45:52.223231  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:45:52.267817  171911 cri.go:89] found id: ""
	I0903 23:45:52.267851  171911 logs.go:282] 0 containers: []
	W0903 23:45:52.267862  171911 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:45:52.267869  171911 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:45:52.267936  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:45:52.302187  171911 cri.go:89] found id: ""
	I0903 23:45:52.302224  171911 logs.go:282] 0 containers: []
	W0903 23:45:52.302236  171911 logs.go:284] No container was found matching "kindnet"
	I0903 23:45:52.302245  171911 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0903 23:45:52.302315  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0903 23:45:52.336716  171911 cri.go:89] found id: ""
	I0903 23:45:52.336742  171911 logs.go:282] 0 containers: []
	W0903 23:45:52.336750  171911 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0903 23:45:52.336761  171911 logs.go:123] Gathering logs for container status ...
	I0903 23:45:52.336776  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0903 23:45:52.376759  171911 logs.go:123] Gathering logs for kubelet ...
	I0903 23:45:52.376793  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0903 23:45:52.424230  171911 logs.go:123] Gathering logs for dmesg ...
	I0903 23:45:52.424274  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0903 23:45:52.438819  171911 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:45:52.438850  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:45:52.505537  171911 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0903 23:45:52.505562  171911 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:45:52.505577  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
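The block above is one complete iteration of the wait loop this log repeats: probe for a kube-apiserver process over SSH, list CRI containers for each expected control-plane component, and, finding none, gather kubelet, dmesg, describe-nodes, CRI-O and container-status logs before retrying. A minimal, hypothetical sketch of that loop's shape (not minikube's actual code; the pgrep command string is copied from the log):

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// pidOfAPIServer mirrors the probe logged above:
	//   sudo pgrep -xnf kube-apiserver.*minikube.*
	// pgrep exits non-zero when no process matches, so err != nil
	// means the apiserver process is not running yet.
	func pidOfAPIServer() (string, error) {
		out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
		return string(out), err
	}

	func main() {
		for {
			if pid, err := pidOfAPIServer(); err == nil {
				fmt.Printf("kube-apiserver is up, pid %s", pid)
				return
			}
			time.Sleep(3 * time.Second) // the timestamps above show a ~3s cadence
		}
	}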
	I0903 23:45:55.082568  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:45:55.100018  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:45:55.100095  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:45:55.135160  171911 cri.go:89] found id: ""
	I0903 23:45:55.135189  171911 logs.go:282] 0 containers: []
	W0903 23:45:55.135201  171911 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:45:55.135210  171911 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:45:55.135268  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:45:55.175763  171911 cri.go:89] found id: ""
	I0903 23:45:55.175800  171911 logs.go:282] 0 containers: []
	W0903 23:45:55.175808  171911 logs.go:284] No container was found matching "etcd"
	I0903 23:45:55.175814  171911 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:45:55.175875  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:45:55.209987  171911 cri.go:89] found id: ""
	I0903 23:45:55.210015  171911 logs.go:282] 0 containers: []
	W0903 23:45:55.210024  171911 logs.go:284] No container was found matching "coredns"
	I0903 23:45:55.210030  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:45:55.210090  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:45:55.244587  171911 cri.go:89] found id: ""
	I0903 23:45:55.244615  171911 logs.go:282] 0 containers: []
	W0903 23:45:55.244623  171911 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:45:55.244630  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:45:55.244699  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:45:55.279333  171911 cri.go:89] found id: ""
	I0903 23:45:55.279363  171911 logs.go:282] 0 containers: []
	W0903 23:45:55.279373  171911 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:45:55.279381  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:45:55.279451  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:45:55.313220  171911 cri.go:89] found id: ""
	I0903 23:45:55.313263  171911 logs.go:282] 0 containers: []
	W0903 23:45:55.313273  171911 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:45:55.313281  171911 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:45:55.313355  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:45:55.348181  171911 cri.go:89] found id: ""
	I0903 23:45:55.348215  171911 logs.go:282] 0 containers: []
	W0903 23:45:55.348224  171911 logs.go:284] No container was found matching "kindnet"
	I0903 23:45:55.348230  171911 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0903 23:45:55.348299  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0903 23:45:55.381456  171911 cri.go:89] found id: ""
	I0903 23:45:55.381482  171911 logs.go:282] 0 containers: []
	W0903 23:45:55.381490  171911 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0903 23:45:55.381500  171911 logs.go:123] Gathering logs for kubelet ...
	I0903 23:45:55.381516  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0903 23:45:55.433817  171911 logs.go:123] Gathering logs for dmesg ...
	I0903 23:45:55.433856  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0903 23:45:55.447772  171911 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:45:55.447804  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:45:55.513762  171911 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0903 23:45:55.513795  171911 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:45:55.513812  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0903 23:45:55.585576  171911 logs.go:123] Gathering logs for container status ...
	I0903 23:45:55.585615  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
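Each `cri.go:54] listing CRI containers` line corresponds to one crictl call of the form logged above; `-a` includes exited containers and `--quiet` prints bare container IDs, one per line, so the empty `found id: ""` results mean no container for that component has ever been created on this node. A hedged Go sketch of the same check (the helper name is hypothetical):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// containerIDs mirrors: sudo crictl ps -a --quiet --name=<component>
	func containerIDs(component string) ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+component).Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(string(out)), nil // empty output -> empty slice
	}

	func main() {
		for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
			if ids, err := containerIDs(c); err != nil || len(ids) == 0 {
				fmt.Printf("no container was found matching %q\n", c) // the W-level lines above
			}
		}
	}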
	I0903 23:45:58.125483  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:45:58.142430  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:45:58.142505  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:45:58.177668  171911 cri.go:89] found id: ""
	I0903 23:45:58.177697  171911 logs.go:282] 0 containers: []
	W0903 23:45:58.177709  171911 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:45:58.177717  171911 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:45:58.177791  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:45:58.212662  171911 cri.go:89] found id: ""
	I0903 23:45:58.212688  171911 logs.go:282] 0 containers: []
	W0903 23:45:58.212697  171911 logs.go:284] No container was found matching "etcd"
	I0903 23:45:58.212705  171911 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:45:58.212766  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:45:58.248588  171911 cri.go:89] found id: ""
	I0903 23:45:58.248616  171911 logs.go:282] 0 containers: []
	W0903 23:45:58.248623  171911 logs.go:284] No container was found matching "coredns"
	I0903 23:45:58.248629  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:45:58.248684  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:45:58.283427  171911 cri.go:89] found id: ""
	I0903 23:45:58.283459  171911 logs.go:282] 0 containers: []
	W0903 23:45:58.283468  171911 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:45:58.283475  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:45:58.283537  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:45:58.319164  171911 cri.go:89] found id: ""
	I0903 23:45:58.319195  171911 logs.go:282] 0 containers: []
	W0903 23:45:58.319203  171911 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:45:58.319209  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:45:58.319265  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:45:58.354722  171911 cri.go:89] found id: ""
	I0903 23:45:58.354750  171911 logs.go:282] 0 containers: []
	W0903 23:45:58.354758  171911 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:45:58.354764  171911 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:45:58.354816  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:45:58.389144  171911 cri.go:89] found id: ""
	I0903 23:45:58.389171  171911 logs.go:282] 0 containers: []
	W0903 23:45:58.389181  171911 logs.go:284] No container was found matching "kindnet"
	I0903 23:45:58.389187  171911 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0903 23:45:58.389240  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0903 23:45:58.423096  171911 cri.go:89] found id: ""
	I0903 23:45:58.423125  171911 logs.go:282] 0 containers: []
	W0903 23:45:58.423134  171911 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0903 23:45:58.423144  171911 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:45:58.423158  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0903 23:45:58.500171  171911 logs.go:123] Gathering logs for container status ...
	I0903 23:45:58.500208  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0903 23:45:58.538635  171911 logs.go:123] Gathering logs for kubelet ...
	I0903 23:45:58.538663  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0903 23:45:58.584846  171911 logs.go:123] Gathering logs for dmesg ...
	I0903 23:45:58.584882  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0903 23:45:58.598653  171911 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:45:58.598685  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:45:58.666401  171911 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
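Every describe-nodes attempt in this log fails identically: kubectl dials the apiserver endpoint from the kubeconfig, localhost:8443, nothing is listening, and the command exits with status 1 before writing any stdout. A minimal sketch of that failure mode (a hypothetical standalone check, not part of minikube):

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// kubectl's "connection refused" is a failed TCP dial to the
		// server in the kubeconfig; the same raw dial fails here too.
		conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
		if err != nil {
			fmt.Println("apiserver endpoint unreachable:", err)
			return
		}
		conn.Close()
		fmt.Println("apiserver is listening")
	}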
	I0903 23:46:01.168834  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:46:01.185866  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:46:01.185953  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:46:01.219970  171911 cri.go:89] found id: ""
	I0903 23:46:01.219998  171911 logs.go:282] 0 containers: []
	W0903 23:46:01.220006  171911 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:46:01.220012  171911 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:46:01.220075  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:46:01.253640  171911 cri.go:89] found id: ""
	I0903 23:46:01.253673  171911 logs.go:282] 0 containers: []
	W0903 23:46:01.253683  171911 logs.go:284] No container was found matching "etcd"
	I0903 23:46:01.253691  171911 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:46:01.253756  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:46:01.288533  171911 cri.go:89] found id: ""
	I0903 23:46:01.288564  171911 logs.go:282] 0 containers: []
	W0903 23:46:01.288576  171911 logs.go:284] No container was found matching "coredns"
	I0903 23:46:01.288584  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:46:01.288655  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:46:01.323184  171911 cri.go:89] found id: ""
	I0903 23:46:01.323217  171911 logs.go:282] 0 containers: []
	W0903 23:46:01.323226  171911 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:46:01.323232  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:46:01.323289  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:46:01.356988  171911 cri.go:89] found id: ""
	I0903 23:46:01.357023  171911 logs.go:282] 0 containers: []
	W0903 23:46:01.357034  171911 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:46:01.357045  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:46:01.357106  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:46:01.390140  171911 cri.go:89] found id: ""
	I0903 23:46:01.390168  171911 logs.go:282] 0 containers: []
	W0903 23:46:01.390176  171911 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:46:01.390182  171911 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:46:01.390247  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:46:01.423178  171911 cri.go:89] found id: ""
	I0903 23:46:01.423207  171911 logs.go:282] 0 containers: []
	W0903 23:46:01.423215  171911 logs.go:284] No container was found matching "kindnet"
	I0903 23:46:01.423222  171911 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0903 23:46:01.423285  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0903 23:46:01.461100  171911 cri.go:89] found id: ""
	I0903 23:46:01.461138  171911 logs.go:282] 0 containers: []
	W0903 23:46:01.461148  171911 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0903 23:46:01.461160  171911 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:46:01.461185  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0903 23:46:01.535231  171911 logs.go:123] Gathering logs for container status ...
	I0903 23:46:01.535274  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0903 23:46:01.574120  171911 logs.go:123] Gathering logs for kubelet ...
	I0903 23:46:01.574154  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0903 23:46:01.621782  171911 logs.go:123] Gathering logs for dmesg ...
	I0903 23:46:01.621817  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0903 23:46:01.642205  171911 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:46:01.642246  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:46:01.707505  171911 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0903 23:46:04.207758  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:46:04.225090  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:46:04.225162  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:46:04.259542  171911 cri.go:89] found id: ""
	I0903 23:46:04.259573  171911 logs.go:282] 0 containers: []
	W0903 23:46:04.259580  171911 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:46:04.259586  171911 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:46:04.259638  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:46:04.294395  171911 cri.go:89] found id: ""
	I0903 23:46:04.294422  171911 logs.go:282] 0 containers: []
	W0903 23:46:04.294430  171911 logs.go:284] No container was found matching "etcd"
	I0903 23:46:04.294436  171911 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:46:04.294488  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:46:04.329086  171911 cri.go:89] found id: ""
	I0903 23:46:04.329125  171911 logs.go:282] 0 containers: []
	W0903 23:46:04.329134  171911 logs.go:284] No container was found matching "coredns"
	I0903 23:46:04.329140  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:46:04.329194  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:46:04.362247  171911 cri.go:89] found id: ""
	I0903 23:46:04.362278  171911 logs.go:282] 0 containers: []
	W0903 23:46:04.362286  171911 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:46:04.362292  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:46:04.362348  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:46:04.397700  171911 cri.go:89] found id: ""
	I0903 23:46:04.397731  171911 logs.go:282] 0 containers: []
	W0903 23:46:04.397739  171911 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:46:04.397745  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:46:04.397800  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:46:04.431332  171911 cri.go:89] found id: ""
	I0903 23:46:04.431360  171911 logs.go:282] 0 containers: []
	W0903 23:46:04.431368  171911 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:46:04.431374  171911 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:46:04.431425  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:46:04.465005  171911 cri.go:89] found id: ""
	I0903 23:46:04.465035  171911 logs.go:282] 0 containers: []
	W0903 23:46:04.465042  171911 logs.go:284] No container was found matching "kindnet"
	I0903 23:46:04.465049  171911 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0903 23:46:04.465108  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0903 23:46:04.500441  171911 cri.go:89] found id: ""
	I0903 23:46:04.500470  171911 logs.go:282] 0 containers: []
	W0903 23:46:04.500478  171911 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0903 23:46:04.500487  171911 logs.go:123] Gathering logs for container status ...
	I0903 23:46:04.500505  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0903 23:46:04.538356  171911 logs.go:123] Gathering logs for kubelet ...
	I0903 23:46:04.538389  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0903 23:46:04.585363  171911 logs.go:123] Gathering logs for dmesg ...
	I0903 23:46:04.585412  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0903 23:46:04.602519  171911 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:46:04.602553  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:46:04.676451  171911 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0903 23:46:04.676474  171911 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:46:04.676488  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
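The five `Gathering logs for ...` steps repeated in each cycle are fixed shell pipelines run over SSH. The commands below are copied verbatim from the log, wrapped in a hypothetical local runner for anyone reproducing the collection by hand on the node:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// gatherCmds lists the exact pipelines visible in the log above.
	var gatherCmds = []struct{ name, cmd string }{
		{"kubelet", "sudo journalctl -u kubelet -n 400"},
		{"dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
		{"describe nodes", "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"},
		{"CRI-O", "sudo journalctl -u crio -n 400"},
		{"container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"},
	}

	func main() {
		for _, g := range gatherCmds {
			fmt.Printf("==> %s <==\n", g.name)
			// CombinedOutput keeps stderr, which is where kubectl's
			// "connection refused" message lands.
			out, _ := exec.Command("/bin/bash", "-c", g.cmd).CombinedOutput()
			fmt.Print(string(out))
		}
	}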
	I0903 23:46:07.260862  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:46:07.278149  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:46:07.278214  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:46:07.320356  171911 cri.go:89] found id: ""
	I0903 23:46:07.320393  171911 logs.go:282] 0 containers: []
	W0903 23:46:07.320405  171911 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:46:07.320412  171911 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:46:07.320498  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:46:07.355032  171911 cri.go:89] found id: ""
	I0903 23:46:07.355063  171911 logs.go:282] 0 containers: []
	W0903 23:46:07.355074  171911 logs.go:284] No container was found matching "etcd"
	I0903 23:46:07.355090  171911 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:46:07.355155  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:46:07.391094  171911 cri.go:89] found id: ""
	I0903 23:46:07.391119  171911 logs.go:282] 0 containers: []
	W0903 23:46:07.391129  171911 logs.go:284] No container was found matching "coredns"
	I0903 23:46:07.391136  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:46:07.391195  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:46:07.431946  171911 cri.go:89] found id: ""
	I0903 23:46:07.431979  171911 logs.go:282] 0 containers: []
	W0903 23:46:07.431988  171911 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:46:07.431994  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:46:07.432049  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:46:07.470935  171911 cri.go:89] found id: ""
	I0903 23:46:07.470965  171911 logs.go:282] 0 containers: []
	W0903 23:46:07.470974  171911 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:46:07.470981  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:46:07.471035  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:46:07.507140  171911 cri.go:89] found id: ""
	I0903 23:46:07.507171  171911 logs.go:282] 0 containers: []
	W0903 23:46:07.507179  171911 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:46:07.507185  171911 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:46:07.507243  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:46:07.542978  171911 cri.go:89] found id: ""
	I0903 23:46:07.543007  171911 logs.go:282] 0 containers: []
	W0903 23:46:07.543014  171911 logs.go:284] No container was found matching "kindnet"
	I0903 23:46:07.543022  171911 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0903 23:46:07.543083  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0903 23:46:07.578836  171911 cri.go:89] found id: ""
	I0903 23:46:07.578867  171911 logs.go:282] 0 containers: []
	W0903 23:46:07.578875  171911 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0903 23:46:07.578885  171911 logs.go:123] Gathering logs for kubelet ...
	I0903 23:46:07.578911  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0903 23:46:07.625808  171911 logs.go:123] Gathering logs for dmesg ...
	I0903 23:46:07.625852  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0903 23:46:07.639685  171911 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:46:07.639719  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:46:07.705947  171911 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0903 23:46:07.705975  171911 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:46:07.705994  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0903 23:46:07.782360  171911 logs.go:123] Gathering logs for container status ...
	I0903 23:46:07.782406  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0903 23:46:10.331295  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:46:10.348405  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:46:10.348479  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:46:10.381149  171911 cri.go:89] found id: ""
	I0903 23:46:10.381178  171911 logs.go:282] 0 containers: []
	W0903 23:46:10.381185  171911 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:46:10.381192  171911 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:46:10.381254  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:46:10.414056  171911 cri.go:89] found id: ""
	I0903 23:46:10.414096  171911 logs.go:282] 0 containers: []
	W0903 23:46:10.414108  171911 logs.go:284] No container was found matching "etcd"
	I0903 23:46:10.414117  171911 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:46:10.414174  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:46:10.449437  171911 cri.go:89] found id: ""
	I0903 23:46:10.449467  171911 logs.go:282] 0 containers: []
	W0903 23:46:10.449478  171911 logs.go:284] No container was found matching "coredns"
	I0903 23:46:10.449485  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:46:10.449568  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:46:10.485019  171911 cri.go:89] found id: ""
	I0903 23:46:10.485047  171911 logs.go:282] 0 containers: []
	W0903 23:46:10.485058  171911 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:46:10.485064  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:46:10.485115  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:46:10.517909  171911 cri.go:89] found id: ""
	I0903 23:46:10.517943  171911 logs.go:282] 0 containers: []
	W0903 23:46:10.517955  171911 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:46:10.517963  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:46:10.518037  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:46:10.551948  171911 cri.go:89] found id: ""
	I0903 23:46:10.551976  171911 logs.go:282] 0 containers: []
	W0903 23:46:10.551984  171911 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:46:10.551990  171911 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:46:10.552053  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:46:10.586008  171911 cri.go:89] found id: ""
	I0903 23:46:10.586042  171911 logs.go:282] 0 containers: []
	W0903 23:46:10.586052  171911 logs.go:284] No container was found matching "kindnet"
	I0903 23:46:10.586060  171911 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0903 23:46:10.586130  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0903 23:46:10.621028  171911 cri.go:89] found id: ""
	I0903 23:46:10.621054  171911 logs.go:282] 0 containers: []
	W0903 23:46:10.621062  171911 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0903 23:46:10.621073  171911 logs.go:123] Gathering logs for kubelet ...
	I0903 23:46:10.621122  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0903 23:46:10.670328  171911 logs.go:123] Gathering logs for dmesg ...
	I0903 23:46:10.670367  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0903 23:46:10.684168  171911 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:46:10.684196  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:46:10.750643  171911 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0903 23:46:10.750664  171911 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:46:10.750678  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0903 23:46:10.824493  171911 logs.go:123] Gathering logs for container status ...
	I0903 23:46:10.824545  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
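The container-status command above carries a two-level fallback: `which crictl || echo crictl` resolves crictl's absolute path (falling back to a bare $PATH lookup at exec time), and `|| sudo docker ps -a` falls back to docker if crictl fails entirely. The same logic sketched in Go (an assumed equivalent, not minikube's code):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// containerStatus mirrors:
	//   sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
	func containerStatus() ([]byte, error) {
		bin, err := exec.LookPath("crictl")
		if err != nil {
			bin = "crictl" // bare name, resolved via $PATH when run
		}
		if out, err := exec.Command("sudo", bin, "ps", "-a").Output(); err == nil {
			return out, nil
		}
		return exec.Command("sudo", "docker", "ps", "-a").Output() // docker fallback
	}

	func main() {
		out, err := containerStatus()
		if err != nil {
			fmt.Println("neither crictl nor docker produced a listing:", err)
			return
		}
		fmt.Print(string(out))
	}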
	I0903 23:46:13.375299  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:46:13.392043  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:46:13.392129  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:46:13.427112  171911 cri.go:89] found id: ""
	I0903 23:46:13.427149  171911 logs.go:282] 0 containers: []
	W0903 23:46:13.427159  171911 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:46:13.427167  171911 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:46:13.427240  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:46:13.462866  171911 cri.go:89] found id: ""
	I0903 23:46:13.462900  171911 logs.go:282] 0 containers: []
	W0903 23:46:13.462908  171911 logs.go:284] No container was found matching "etcd"
	I0903 23:46:13.462915  171911 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:46:13.462976  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:46:13.498341  171911 cri.go:89] found id: ""
	I0903 23:46:13.498372  171911 logs.go:282] 0 containers: []
	W0903 23:46:13.498381  171911 logs.go:284] No container was found matching "coredns"
	I0903 23:46:13.498387  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:46:13.498440  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:46:13.543600  171911 cri.go:89] found id: ""
	I0903 23:46:13.543627  171911 logs.go:282] 0 containers: []
	W0903 23:46:13.543636  171911 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:46:13.543642  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:46:13.543696  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:46:13.578615  171911 cri.go:89] found id: ""
	I0903 23:46:13.578643  171911 logs.go:282] 0 containers: []
	W0903 23:46:13.578651  171911 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:46:13.578657  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:46:13.578720  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:46:13.613164  171911 cri.go:89] found id: ""
	I0903 23:46:13.613190  171911 logs.go:282] 0 containers: []
	W0903 23:46:13.613197  171911 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:46:13.613204  171911 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:46:13.613268  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:46:13.648193  171911 cri.go:89] found id: ""
	I0903 23:46:13.648219  171911 logs.go:282] 0 containers: []
	W0903 23:46:13.648227  171911 logs.go:284] No container was found matching "kindnet"
	I0903 23:46:13.648235  171911 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0903 23:46:13.648289  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0903 23:46:13.692585  171911 cri.go:89] found id: ""
	I0903 23:46:13.692611  171911 logs.go:282] 0 containers: []
	W0903 23:46:13.692619  171911 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0903 23:46:13.692630  171911 logs.go:123] Gathering logs for dmesg ...
	I0903 23:46:13.692649  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0903 23:46:13.709447  171911 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:46:13.709475  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:46:13.787419  171911 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0903 23:46:13.787450  171911 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:46:13.787466  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0903 23:46:13.876087  171911 logs.go:123] Gathering logs for container status ...
	I0903 23:46:13.876121  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0903 23:46:13.922854  171911 logs.go:123] Gathering logs for kubelet ...
	I0903 23:46:13.922882  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0903 23:46:16.471424  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:46:16.489172  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:46:16.489260  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:46:16.523832  171911 cri.go:89] found id: ""
	I0903 23:46:16.523860  171911 logs.go:282] 0 containers: []
	W0903 23:46:16.523867  171911 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:46:16.523884  171911 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:46:16.523938  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:46:16.561012  171911 cri.go:89] found id: ""
	I0903 23:46:16.561043  171911 logs.go:282] 0 containers: []
	W0903 23:46:16.561051  171911 logs.go:284] No container was found matching "etcd"
	I0903 23:46:16.561057  171911 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:46:16.561112  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:46:16.595123  171911 cri.go:89] found id: ""
	I0903 23:46:16.595149  171911 logs.go:282] 0 containers: []
	W0903 23:46:16.595156  171911 logs.go:284] No container was found matching "coredns"
	I0903 23:46:16.595161  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:46:16.595214  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:46:16.629844  171911 cri.go:89] found id: ""
	I0903 23:46:16.629879  171911 logs.go:282] 0 containers: []
	W0903 23:46:16.629887  171911 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:46:16.629893  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:46:16.629946  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:46:16.665052  171911 cri.go:89] found id: ""
	I0903 23:46:16.665081  171911 logs.go:282] 0 containers: []
	W0903 23:46:16.665089  171911 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:46:16.665103  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:46:16.665176  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:46:16.699559  171911 cri.go:89] found id: ""
	I0903 23:46:16.699591  171911 logs.go:282] 0 containers: []
	W0903 23:46:16.699599  171911 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:46:16.699607  171911 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:46:16.699670  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:46:16.734191  171911 cri.go:89] found id: ""
	I0903 23:46:16.734221  171911 logs.go:282] 0 containers: []
	W0903 23:46:16.734229  171911 logs.go:284] No container was found matching "kindnet"
	I0903 23:46:16.734235  171911 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0903 23:46:16.734328  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0903 23:46:16.770088  171911 cri.go:89] found id: ""
	I0903 23:46:16.770117  171911 logs.go:282] 0 containers: []
	W0903 23:46:16.770125  171911 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0903 23:46:16.770135  171911 logs.go:123] Gathering logs for kubelet ...
	I0903 23:46:16.770150  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0903 23:46:16.818779  171911 logs.go:123] Gathering logs for dmesg ...
	I0903 23:46:16.818821  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0903 23:46:16.833000  171911 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:46:16.833028  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:46:16.896259  171911 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0903 23:46:16.896283  171911 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:46:16.896301  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0903 23:46:16.973287  171911 logs.go:123] Gathering logs for container status ...
	I0903 23:46:16.973330  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0903 23:46:19.513618  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:46:19.533892  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:46:19.533986  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:46:19.575679  171911 cri.go:89] found id: ""
	I0903 23:46:19.575712  171911 logs.go:282] 0 containers: []
	W0903 23:46:19.575722  171911 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:46:19.575731  171911 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:46:19.575803  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:46:19.623477  171911 cri.go:89] found id: ""
	I0903 23:46:19.623509  171911 logs.go:282] 0 containers: []
	W0903 23:46:19.623517  171911 logs.go:284] No container was found matching "etcd"
	I0903 23:46:19.623524  171911 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:46:19.623592  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:46:19.663676  171911 cri.go:89] found id: ""
	I0903 23:46:19.663709  171911 logs.go:282] 0 containers: []
	W0903 23:46:19.663718  171911 logs.go:284] No container was found matching "coredns"
	I0903 23:46:19.663725  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:46:19.663792  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:46:19.698413  171911 cri.go:89] found id: ""
	I0903 23:46:19.698457  171911 logs.go:282] 0 containers: []
	W0903 23:46:19.698466  171911 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:46:19.698473  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:46:19.698545  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:46:19.734009  171911 cri.go:89] found id: ""
	I0903 23:46:19.734043  171911 logs.go:282] 0 containers: []
	W0903 23:46:19.734051  171911 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:46:19.734057  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:46:19.734124  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:46:19.770645  171911 cri.go:89] found id: ""
	I0903 23:46:19.770674  171911 logs.go:282] 0 containers: []
	W0903 23:46:19.770682  171911 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:46:19.770688  171911 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:46:19.770749  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:46:19.805002  171911 cri.go:89] found id: ""
	I0903 23:46:19.805039  171911 logs.go:282] 0 containers: []
	W0903 23:46:19.805051  171911 logs.go:284] No container was found matching "kindnet"
	I0903 23:46:19.805062  171911 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0903 23:46:19.805134  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0903 23:46:19.839613  171911 cri.go:89] found id: ""
	I0903 23:46:19.839649  171911 logs.go:282] 0 containers: []
	W0903 23:46:19.839659  171911 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0903 23:46:19.839672  171911 logs.go:123] Gathering logs for kubelet ...
	I0903 23:46:19.839687  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0903 23:46:19.892825  171911 logs.go:123] Gathering logs for dmesg ...
	I0903 23:46:19.892868  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0903 23:46:19.907172  171911 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:46:19.907215  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:46:19.972520  171911 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0903 23:46:19.972549  171911 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:46:19.972563  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0903 23:46:20.047246  171911 logs.go:123] Gathering logs for container status ...
	I0903 23:46:20.047313  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0903 23:46:22.586936  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:46:22.603850  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:46:22.603927  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:46:22.638907  171911 cri.go:89] found id: ""
	I0903 23:46:22.638936  171911 logs.go:282] 0 containers: []
	W0903 23:46:22.638945  171911 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:46:22.638954  171911 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:46:22.639025  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:46:22.674519  171911 cri.go:89] found id: ""
	I0903 23:46:22.674550  171911 logs.go:282] 0 containers: []
	W0903 23:46:22.674557  171911 logs.go:284] No container was found matching "etcd"
	I0903 23:46:22.674563  171911 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:46:22.674623  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:46:22.709223  171911 cri.go:89] found id: ""
	I0903 23:46:22.709256  171911 logs.go:282] 0 containers: []
	W0903 23:46:22.709267  171911 logs.go:284] No container was found matching "coredns"
	I0903 23:46:22.709274  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:46:22.709343  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:46:22.744699  171911 cri.go:89] found id: ""
	I0903 23:46:22.744732  171911 logs.go:282] 0 containers: []
	W0903 23:46:22.744742  171911 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:46:22.744748  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:46:22.744801  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:46:22.780192  171911 cri.go:89] found id: ""
	I0903 23:46:22.780226  171911 logs.go:282] 0 containers: []
	W0903 23:46:22.780234  171911 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:46:22.780240  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:46:22.780296  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:46:22.814575  171911 cri.go:89] found id: ""
	I0903 23:46:22.814606  171911 logs.go:282] 0 containers: []
	W0903 23:46:22.814615  171911 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:46:22.814621  171911 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:46:22.814674  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:46:22.851385  171911 cri.go:89] found id: ""
	I0903 23:46:22.851415  171911 logs.go:282] 0 containers: []
	W0903 23:46:22.851423  171911 logs.go:284] No container was found matching "kindnet"
	I0903 23:46:22.851429  171911 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0903 23:46:22.851480  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0903 23:46:22.884676  171911 cri.go:89] found id: ""
	I0903 23:46:22.884705  171911 logs.go:282] 0 containers: []
	W0903 23:46:22.884713  171911 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0903 23:46:22.884723  171911 logs.go:123] Gathering logs for kubelet ...
	I0903 23:46:22.884734  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0903 23:46:22.935185  171911 logs.go:123] Gathering logs for dmesg ...
	I0903 23:46:22.935223  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0903 23:46:22.949406  171911 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:46:22.949442  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:46:23.012847  171911 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0903 23:46:23.012877  171911 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:46:23.012895  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0903 23:46:23.084409  171911 logs.go:123] Gathering logs for container status ...
	I0903 23:46:23.084455  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0903 23:46:25.631753  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:46:25.651358  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:46:25.651431  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:46:25.685485  171911 cri.go:89] found id: ""
	I0903 23:46:25.685514  171911 logs.go:282] 0 containers: []
	W0903 23:46:25.685523  171911 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:46:25.685528  171911 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:46:25.685591  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:46:25.720765  171911 cri.go:89] found id: ""
	I0903 23:46:25.720796  171911 logs.go:282] 0 containers: []
	W0903 23:46:25.720804  171911 logs.go:284] No container was found matching "etcd"
	I0903 23:46:25.720810  171911 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:46:25.720867  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:46:25.754626  171911 cri.go:89] found id: ""
	I0903 23:46:25.754659  171911 logs.go:282] 0 containers: []
	W0903 23:46:25.754670  171911 logs.go:284] No container was found matching "coredns"
	I0903 23:46:25.754678  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:46:25.754731  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:46:25.789362  171911 cri.go:89] found id: ""
	I0903 23:46:25.789411  171911 logs.go:282] 0 containers: []
	W0903 23:46:25.789421  171911 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:46:25.789429  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:46:25.789497  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:46:25.826469  171911 cri.go:89] found id: ""
	I0903 23:46:25.826502  171911 logs.go:282] 0 containers: []
	W0903 23:46:25.826511  171911 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:46:25.826519  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:46:25.826582  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:46:25.861006  171911 cri.go:89] found id: ""
	I0903 23:46:25.861045  171911 logs.go:282] 0 containers: []
	W0903 23:46:25.861057  171911 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:46:25.861066  171911 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:46:25.861141  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:46:25.895640  171911 cri.go:89] found id: ""
	I0903 23:46:25.895676  171911 logs.go:282] 0 containers: []
	W0903 23:46:25.895687  171911 logs.go:284] No container was found matching "kindnet"
	I0903 23:46:25.895696  171911 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0903 23:46:25.895766  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0903 23:46:25.930858  171911 cri.go:89] found id: ""
	I0903 23:46:25.930886  171911 logs.go:282] 0 containers: []
	W0903 23:46:25.930894  171911 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0903 23:46:25.930903  171911 logs.go:123] Gathering logs for dmesg ...
	I0903 23:46:25.930917  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0903 23:46:25.945023  171911 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:46:25.945048  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:46:26.011367  171911 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0903 23:46:26.011401  171911 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:46:26.011419  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0903 23:46:26.088648  171911 logs.go:123] Gathering logs for container status ...
	I0903 23:46:26.088697  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0903 23:46:26.127560  171911 logs.go:123] Gathering logs for kubelet ...
	I0903 23:46:26.127595  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0903 23:46:28.679659  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:46:28.696950  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:46:28.697030  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:46:28.730995  171911 cri.go:89] found id: ""
	I0903 23:46:28.731026  171911 logs.go:282] 0 containers: []
	W0903 23:46:28.731039  171911 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:46:28.731047  171911 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:46:28.731121  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:46:28.765348  171911 cri.go:89] found id: ""
	I0903 23:46:28.765377  171911 logs.go:282] 0 containers: []
	W0903 23:46:28.765396  171911 logs.go:284] No container was found matching "etcd"
	I0903 23:46:28.765404  171911 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:46:28.765471  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:46:28.801427  171911 cri.go:89] found id: ""
	I0903 23:46:28.801459  171911 logs.go:282] 0 containers: []
	W0903 23:46:28.801470  171911 logs.go:284] No container was found matching "coredns"
	I0903 23:46:28.801478  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:46:28.801545  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:46:28.836740  171911 cri.go:89] found id: ""
	I0903 23:46:28.836766  171911 logs.go:282] 0 containers: []
	W0903 23:46:28.836775  171911 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:46:28.836781  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:46:28.836865  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:46:28.872484  171911 cri.go:89] found id: ""
	I0903 23:46:28.872517  171911 logs.go:282] 0 containers: []
	W0903 23:46:28.872528  171911 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:46:28.872538  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:46:28.872619  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:46:28.906796  171911 cri.go:89] found id: ""
	I0903 23:46:28.906840  171911 logs.go:282] 0 containers: []
	W0903 23:46:28.906854  171911 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:46:28.906864  171911 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:46:28.906936  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:46:28.941330  171911 cri.go:89] found id: ""
	I0903 23:46:28.941359  171911 logs.go:282] 0 containers: []
	W0903 23:46:28.941367  171911 logs.go:284] No container was found matching "kindnet"
	I0903 23:46:28.941373  171911 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0903 23:46:28.941447  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0903 23:46:28.975273  171911 cri.go:89] found id: ""
	I0903 23:46:28.975304  171911 logs.go:282] 0 containers: []
	W0903 23:46:28.975316  171911 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0903 23:46:28.975328  171911 logs.go:123] Gathering logs for container status ...
	I0903 23:46:28.975351  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0903 23:46:29.013344  171911 logs.go:123] Gathering logs for kubelet ...
	I0903 23:46:29.013374  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0903 23:46:29.062906  171911 logs.go:123] Gathering logs for dmesg ...
	I0903 23:46:29.062943  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0903 23:46:29.077068  171911 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:46:29.077094  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:46:29.141017  171911 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0903 23:46:29.141041  171911 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:46:29.141059  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0903 23:46:31.720110  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:46:31.737478  171911 kubeadm.go:593] duration metric: took 4m4.418875365s to restartPrimaryControlPlane
	W0903 23:46:31.737562  171911 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0903 23:46:31.737592  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0903 23:46:36.182110  171911 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (4.444484741s)
	I0903 23:46:36.182205  171911 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0903 23:46:36.197763  171911 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0903 23:46:36.209295  171911 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0903 23:46:36.220561  171911 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0903 23:46:36.220584  171911 kubeadm.go:157] found existing configuration files:
	
	I0903 23:46:36.220630  171911 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0903 23:46:36.231194  171911 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0903 23:46:36.231261  171911 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0903 23:46:36.242263  171911 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0903 23:46:36.252204  171911 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0903 23:46:36.252278  171911 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0903 23:46:36.263654  171911 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0903 23:46:36.274160  171911 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0903 23:46:36.274216  171911 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0903 23:46:36.285535  171911 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0903 23:46:36.296495  171911 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0903 23:46:36.296566  171911 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0903 23:46:36.308036  171911 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0903 23:46:36.376723  171911 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0903 23:46:36.376807  171911 kubeadm.go:310] [preflight] Running pre-flight checks
	I0903 23:46:36.507237  171911 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0903 23:46:36.507356  171911 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0903 23:46:36.507451  171911 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0903 23:46:36.676775  171911 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0903 23:46:36.678771  171911 out.go:252]   - Generating certificates and keys ...
	I0903 23:46:36.678910  171911 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0903 23:46:36.679002  171911 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0903 23:46:36.679121  171911 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0903 23:46:36.679204  171911 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0903 23:46:36.679317  171911 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0903 23:46:36.679385  171911 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0903 23:46:36.679592  171911 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0903 23:46:36.680075  171911 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0903 23:46:36.680443  171911 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0903 23:46:36.680690  171911 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0903 23:46:36.680741  171911 kubeadm.go:310] [certs] Using the existing "sa" key
	I0903 23:46:36.680801  171911 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0903 23:46:37.040729  171911 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0903 23:46:37.327107  171911 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0903 23:46:37.592932  171911 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0903 23:46:37.842405  171911 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0903 23:46:37.860457  171911 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0903 23:46:37.861477  171911 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0903 23:46:37.861541  171911 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0903 23:46:38.009088  171911 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0903 23:46:38.010918  171911 out.go:252]   - Booting up control plane ...
	I0903 23:46:38.011062  171911 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0903 23:46:38.018027  171911 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0903 23:46:38.018106  171911 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0903 23:46:38.018634  171911 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0903 23:46:38.023296  171911 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0903 23:47:18.025738  171911 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0903 23:47:18.026296  171911 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0903 23:47:18.026552  171911 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0903 23:47:23.027174  171911 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0903 23:47:23.027478  171911 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0903 23:47:33.028031  171911 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0903 23:47:33.028314  171911 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0903 23:47:53.028650  171911 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0903 23:47:53.028911  171911 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0903 23:48:33.031053  171911 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0903 23:48:33.031367  171911 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0903 23:48:33.031406  171911 kubeadm.go:310] 
	I0903 23:48:33.031457  171911 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0903 23:48:33.031522  171911 kubeadm.go:310] 		timed out waiting for the condition
	I0903 23:48:33.031531  171911 kubeadm.go:310] 
	I0903 23:48:33.031571  171911 kubeadm.go:310] 	This error is likely caused by:
	I0903 23:48:33.031621  171911 kubeadm.go:310] 		- The kubelet is not running
	I0903 23:48:33.031747  171911 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0903 23:48:33.031758  171911 kubeadm.go:310] 
	I0903 23:48:33.031898  171911 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0903 23:48:33.031946  171911 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0903 23:48:33.032002  171911 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0903 23:48:33.032011  171911 kubeadm.go:310] 
	I0903 23:48:33.032171  171911 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0903 23:48:33.032298  171911 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0903 23:48:33.032308  171911 kubeadm.go:310] 
	I0903 23:48:33.032463  171911 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0903 23:48:33.032612  171911 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0903 23:48:33.032693  171911 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0903 23:48:33.032780  171911 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0903 23:48:33.032797  171911 kubeadm.go:310] 
	I0903 23:48:33.033539  171911 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0903 23:48:33.033643  171911 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0903 23:48:33.033735  171911 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0903 23:48:33.033908  171911 out.go:285] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0903 23:48:33.033966  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0903 23:48:33.484811  171911 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0903 23:48:33.501986  171911 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0903 23:48:33.513610  171911 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0903 23:48:33.513635  171911 kubeadm.go:157] found existing configuration files:
	
	I0903 23:48:33.513694  171911 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0903 23:48:33.524062  171911 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0903 23:48:33.524128  171911 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0903 23:48:33.534922  171911 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0903 23:48:33.544314  171911 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0903 23:48:33.544364  171911 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0903 23:48:33.555345  171911 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0903 23:48:33.565515  171911 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0903 23:48:33.565578  171911 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0903 23:48:33.576111  171911 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0903 23:48:33.586276  171911 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0903 23:48:33.586335  171911 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0903 23:48:33.597298  171911 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0903 23:48:33.791164  171911 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0903 23:50:29.735983  171911 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0903 23:50:29.736108  171911 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0903 23:50:29.738473  171911 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0903 23:50:29.738539  171911 kubeadm.go:310] [preflight] Running pre-flight checks
	I0903 23:50:29.738632  171911 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0903 23:50:29.738777  171911 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0903 23:50:29.738908  171911 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0903 23:50:29.738994  171911 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0903 23:50:29.740823  171911 out.go:252]   - Generating certificates and keys ...
	I0903 23:50:29.740897  171911 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0903 23:50:29.740956  171911 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0903 23:50:29.741026  171911 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0903 23:50:29.741099  171911 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0903 23:50:29.741175  171911 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0903 23:50:29.741225  171911 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0903 23:50:29.741281  171911 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0903 23:50:29.741336  171911 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0903 23:50:29.741423  171911 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0903 23:50:29.741518  171911 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0903 23:50:29.741593  171911 kubeadm.go:310] [certs] Using the existing "sa" key
	I0903 23:50:29.741669  171911 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0903 23:50:29.741746  171911 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0903 23:50:29.741831  171911 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0903 23:50:29.741921  171911 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0903 23:50:29.742004  171911 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0903 23:50:29.742142  171911 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0903 23:50:29.742267  171911 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0903 23:50:29.742339  171911 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0903 23:50:29.742442  171911 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0903 23:50:29.744016  171911 out.go:252]   - Booting up control plane ...
	I0903 23:50:29.744169  171911 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0903 23:50:29.744283  171911 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0903 23:50:29.744364  171911 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0903 23:50:29.744481  171911 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0903 23:50:29.744722  171911 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0903 23:50:29.744772  171911 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0903 23:50:29.744856  171911 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0903 23:50:29.745144  171911 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0903 23:50:29.745256  171911 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0903 23:50:29.745481  171911 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0903 23:50:29.745588  171911 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0903 23:50:29.745791  171911 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0903 23:50:29.745882  171911 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0903 23:50:29.746079  171911 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0903 23:50:29.746151  171911 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0903 23:50:29.746327  171911 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0903 23:50:29.746336  171911 kubeadm.go:310] 
	I0903 23:50:29.746385  171911 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0903 23:50:29.746439  171911 kubeadm.go:310] 		timed out waiting for the condition
	I0903 23:50:29.746449  171911 kubeadm.go:310] 
	I0903 23:50:29.746505  171911 kubeadm.go:310] 	This error is likely caused by:
	I0903 23:50:29.746554  171911 kubeadm.go:310] 		- The kubelet is not running
	I0903 23:50:29.746678  171911 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0903 23:50:29.746686  171911 kubeadm.go:310] 
	I0903 23:50:29.746808  171911 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0903 23:50:29.746856  171911 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0903 23:50:29.746908  171911 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0903 23:50:29.746918  171911 kubeadm.go:310] 
	I0903 23:50:29.747078  171911 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0903 23:50:29.747201  171911 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0903 23:50:29.747208  171911 kubeadm.go:310] 
	I0903 23:50:29.747368  171911 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0903 23:50:29.747487  171911 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0903 23:50:29.747603  171911 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0903 23:50:29.747684  171911 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0903 23:50:29.747736  171911 kubeadm.go:310] 
	I0903 23:50:29.747765  171911 kubeadm.go:394] duration metric: took 8m2.477240692s to StartCluster
	I0903 23:50:29.747828  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:50:29.747896  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:50:29.786098  171911 cri.go:89] found id: ""
	I0903 23:50:29.786144  171911 logs.go:282] 0 containers: []
	W0903 23:50:29.786162  171911 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:50:29.786169  171911 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:50:29.786251  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:50:29.819064  171911 cri.go:89] found id: ""
	I0903 23:50:29.819095  171911 logs.go:282] 0 containers: []
	W0903 23:50:29.819103  171911 logs.go:284] No container was found matching "etcd"
	I0903 23:50:29.819109  171911 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:50:29.819164  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:50:29.853192  171911 cri.go:89] found id: ""
	I0903 23:50:29.853223  171911 logs.go:282] 0 containers: []
	W0903 23:50:29.853247  171911 logs.go:284] No container was found matching "coredns"
	I0903 23:50:29.853255  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:50:29.853324  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:50:29.885949  171911 cri.go:89] found id: ""
	I0903 23:50:29.885979  171911 logs.go:282] 0 containers: []
	W0903 23:50:29.885991  171911 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:50:29.885999  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:50:29.886051  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:50:29.920423  171911 cri.go:89] found id: ""
	I0903 23:50:29.920451  171911 logs.go:282] 0 containers: []
	W0903 23:50:29.920458  171911 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:50:29.920464  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:50:29.920516  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:50:29.955106  171911 cri.go:89] found id: ""
	I0903 23:50:29.955142  171911 logs.go:282] 0 containers: []
	W0903 23:50:29.955153  171911 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:50:29.955161  171911 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:50:29.955241  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:50:29.988125  171911 cri.go:89] found id: ""
	I0903 23:50:29.988151  171911 logs.go:282] 0 containers: []
	W0903 23:50:29.988159  171911 logs.go:284] No container was found matching "kindnet"
	I0903 23:50:29.988166  171911 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0903 23:50:29.988220  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0903 23:50:30.022768  171911 cri.go:89] found id: ""
	I0903 23:50:30.022795  171911 logs.go:282] 0 containers: []
	W0903 23:50:30.022803  171911 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0903 23:50:30.022813  171911 logs.go:123] Gathering logs for container status ...
	I0903 23:50:30.022828  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0903 23:50:30.059016  171911 logs.go:123] Gathering logs for kubelet ...
	I0903 23:50:30.059049  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0903 23:50:30.108030  171911 logs.go:123] Gathering logs for dmesg ...
	I0903 23:50:30.108065  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0903 23:50:30.121879  171911 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:50:30.121906  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:50:30.190324  171911 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0903 23:50:30.190349  171911 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:50:30.190362  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W0903 23:50:30.296724  171911 out.go:434] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0903 23:50:30.296816  171911 out.go:285] * 
	* 
	W0903 23:50:30.296931  171911 out.go:285] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0903 23:50:30.296951  171911 out.go:285] * 
	W0903 23:50:30.299691  171911 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0903 23:50:30.303743  171911 out.go:203] 
	W0903 23:50:30.304964  171911 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0903 23:50:30.305026  171911 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0903 23:50:30.305059  171911 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0903 23:50:30.306733  171911 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:257: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p old-k8s-version-335468 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
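The suggestion in the minikube output maps directly onto the failing invocation above: retry the same start with the kubelet cgroup-driver override. A minimal sketch of that retry, reusing the args from the failed start (whether the override actually resolves this v1.20.0 kubelet startup failure is an assumption; see the related issue linked in the log):

	out/minikube-linux-amd64 start -p old-k8s-version-335468 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.20.0 --extra-config=kubelet.cgroup-driver=systemd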
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-335468 -n old-k8s-version-335468
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-335468 -n old-k8s-version-335468: exit status 2 (246.411344ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-335468 logs -n 25
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                    ARGS                                                                                                                     │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p no-preload-434043                                                                                                                                                                                                                        │ no-preload-434043            │ jenkins │ v1.36.0 │ 03 Sep 25 23:40 UTC │ 03 Sep 25 23:40 UTC │
	│ delete  │ -p no-preload-434043                                                                                                                                                                                                                        │ no-preload-434043            │ jenkins │ v1.36.0 │ 03 Sep 25 23:40 UTC │ 03 Sep 25 23:40 UTC │
	│ start   │ -p newest-cni-959437 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.0 │ newest-cni-959437            │ jenkins │ v1.36.0 │ 03 Sep 25 23:40 UTC │ 03 Sep 25 23:40 UTC │
	│ image   │ embed-certs-088493 image list --format=json                                                                                                                                                                                                 │ embed-certs-088493           │ jenkins │ v1.36.0 │ 03 Sep 25 23:40 UTC │ 03 Sep 25 23:40 UTC │
	│ pause   │ -p embed-certs-088493 --alsologtostderr -v=1                                                                                                                                                                                                │ embed-certs-088493           │ jenkins │ v1.36.0 │ 03 Sep 25 23:40 UTC │ 03 Sep 25 23:40 UTC │
	│ unpause │ -p embed-certs-088493 --alsologtostderr -v=1                                                                                                                                                                                                │ embed-certs-088493           │ jenkins │ v1.36.0 │ 03 Sep 25 23:40 UTC │ 03 Sep 25 23:40 UTC │
	│ delete  │ -p embed-certs-088493                                                                                                                                                                                                                       │ embed-certs-088493           │ jenkins │ v1.36.0 │ 03 Sep 25 23:40 UTC │ 03 Sep 25 23:40 UTC │
	│ delete  │ -p embed-certs-088493                                                                                                                                                                                                                       │ embed-certs-088493           │ jenkins │ v1.36.0 │ 03 Sep 25 23:40 UTC │ 03 Sep 25 23:40 UTC │
	│ image   │ default-k8s-diff-port-799704 image list --format=json                                                                                                                                                                                       │ default-k8s-diff-port-799704 │ jenkins │ v1.36.0 │ 03 Sep 25 23:40 UTC │ 03 Sep 25 23:40 UTC │
	│ pause   │ -p default-k8s-diff-port-799704 --alsologtostderr -v=1                                                                                                                                                                                      │ default-k8s-diff-port-799704 │ jenkins │ v1.36.0 │ 03 Sep 25 23:40 UTC │ 03 Sep 25 23:40 UTC │
	│ unpause │ -p default-k8s-diff-port-799704 --alsologtostderr -v=1                                                                                                                                                                                      │ default-k8s-diff-port-799704 │ jenkins │ v1.36.0 │ 03 Sep 25 23:40 UTC │ 03 Sep 25 23:40 UTC │
	│ delete  │ -p default-k8s-diff-port-799704                                                                                                                                                                                                             │ default-k8s-diff-port-799704 │ jenkins │ v1.36.0 │ 03 Sep 25 23:40 UTC │ 03 Sep 25 23:40 UTC │
	│ delete  │ -p default-k8s-diff-port-799704                                                                                                                                                                                                             │ default-k8s-diff-port-799704 │ jenkins │ v1.36.0 │ 03 Sep 25 23:40 UTC │ 03 Sep 25 23:40 UTC │
	│ addons  │ enable metrics-server -p newest-cni-959437 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                     │ newest-cni-959437            │ jenkins │ v1.36.0 │ 03 Sep 25 23:40 UTC │ 03 Sep 25 23:40 UTC │
	│ stop    │ -p newest-cni-959437 --alsologtostderr -v=3                                                                                                                                                                                                 │ newest-cni-959437            │ jenkins │ v1.36.0 │ 03 Sep 25 23:40 UTC │ 03 Sep 25 23:41 UTC │
	│ addons  │ enable dashboard -p newest-cni-959437 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                │ newest-cni-959437            │ jenkins │ v1.36.0 │ 03 Sep 25 23:41 UTC │ 03 Sep 25 23:41 UTC │
	│ start   │ -p newest-cni-959437 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.0 │ newest-cni-959437            │ jenkins │ v1.36.0 │ 03 Sep 25 23:41 UTC │ 03 Sep 25 23:41 UTC │
	│ image   │ newest-cni-959437 image list --format=json                                                                                                                                                                                                  │ newest-cni-959437            │ jenkins │ v1.36.0 │ 03 Sep 25 23:41 UTC │ 03 Sep 25 23:41 UTC │
	│ pause   │ -p newest-cni-959437 --alsologtostderr -v=1                                                                                                                                                                                                 │ newest-cni-959437            │ jenkins │ v1.36.0 │ 03 Sep 25 23:41 UTC │ 03 Sep 25 23:41 UTC │
	│ unpause │ -p newest-cni-959437 --alsologtostderr -v=1                                                                                                                                                                                                 │ newest-cni-959437            │ jenkins │ v1.36.0 │ 03 Sep 25 23:41 UTC │ 03 Sep 25 23:41 UTC │
	│ delete  │ -p newest-cni-959437                                                                                                                                                                                                                        │ newest-cni-959437            │ jenkins │ v1.36.0 │ 03 Sep 25 23:41 UTC │ 03 Sep 25 23:41 UTC │
	│ delete  │ -p newest-cni-959437                                                                                                                                                                                                                        │ newest-cni-959437            │ jenkins │ v1.36.0 │ 03 Sep 25 23:41 UTC │ 03 Sep 25 23:41 UTC │
	│ stop    │ -p old-k8s-version-335468 --alsologtostderr -v=3                                                                                                                                                                                            │ old-k8s-version-335468       │ jenkins │ v1.36.0 │ 03 Sep 25 23:41 UTC │ 03 Sep 25 23:41 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-335468 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                           │ old-k8s-version-335468       │ jenkins │ v1.36.0 │ 03 Sep 25 23:41 UTC │ 03 Sep 25 23:41 UTC │
	│ start   │ -p old-k8s-version-335468 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0 │ old-k8s-version-335468       │ jenkins │ v1.36.0 │ 03 Sep 25 23:41 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/03 23:41:58
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0903 23:41:58.777140  171911 out.go:360] Setting OutFile to fd 1 ...
	I0903 23:41:58.777406  171911 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0903 23:41:58.777416  171911 out.go:374] Setting ErrFile to fd 2...
	I0903 23:41:58.777422  171911 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0903 23:41:58.777607  171911 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21341-109162/.minikube/bin
	I0903 23:41:58.778141  171911 out.go:368] Setting JSON to false
	I0903 23:41:58.779000  171911 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":8663,"bootTime":1756934256,"procs":189,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0903 23:41:58.779090  171911 start.go:140] virtualization: kvm guest
	I0903 23:41:58.781253  171911 out.go:179] * [old-k8s-version-335468] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	I0903 23:41:58.782571  171911 notify.go:220] Checking for updates...
	I0903 23:41:58.782584  171911 out.go:179]   - MINIKUBE_LOCATION=21341
	I0903 23:41:58.783694  171911 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0903 23:41:58.784604  171911 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21341-109162/kubeconfig
	I0903 23:41:58.785686  171911 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21341-109162/.minikube
	I0903 23:41:58.786886  171911 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0903 23:41:58.787874  171911 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0903 23:41:58.789111  171911 config.go:182] Loaded profile config "old-k8s-version-335468": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0903 23:41:58.789531  171911 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21341-109162/.minikube/bin/docker-machine-driver-kvm2
	I0903 23:41:58.789581  171911 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0903 23:41:58.804713  171911 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41019
	I0903 23:41:58.805180  171911 main.go:141] libmachine: () Calling .GetVersion
	I0903 23:41:58.805760  171911 main.go:141] libmachine: Using API Version  1
	I0903 23:41:58.805799  171911 main.go:141] libmachine: () Calling .SetConfigRaw
	I0903 23:41:58.806176  171911 main.go:141] libmachine: () Calling .GetMachineName
	I0903 23:41:58.806424  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .DriverName
	I0903 23:41:58.808193  171911 out.go:179] * Kubernetes 1.34.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.34.0
	I0903 23:41:58.809451  171911 driver.go:421] Setting default libvirt URI to qemu:///system
	I0903 23:41:58.809758  171911 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21341-109162/.minikube/bin/docker-machine-driver-kvm2
	I0903 23:41:58.809795  171911 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0903 23:41:58.825067  171911 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37531
	I0903 23:41:58.825609  171911 main.go:141] libmachine: () Calling .GetVersion
	I0903 23:41:58.826091  171911 main.go:141] libmachine: Using API Version  1
	I0903 23:41:58.826116  171911 main.go:141] libmachine: () Calling .SetConfigRaw
	I0903 23:41:58.826506  171911 main.go:141] libmachine: () Calling .GetMachineName
	I0903 23:41:58.826651  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .DriverName
	I0903 23:41:58.862143  171911 out.go:179] * Using the kvm2 driver based on existing profile
	I0903 23:41:58.863156  171911 start.go:304] selected driver: kvm2
	I0903 23:41:58.863168  171911 start.go:918] validating driver "kvm2" against &{Name:old-k8s-version-335468 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21147/minikube-v1.36.0-1753487480-21147-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-335468 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.80 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0903 23:41:58.863278  171911 start.go:929] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0903 23:41:58.863960  171911 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0903 23:41:58.864040  171911 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21341-109162/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0903 23:41:58.879770  171911 install.go:137] /home/jenkins/minikube-integration/21341-109162/.minikube/bin/docker-machine-driver-kvm2 version is 1.36.0
	I0903 23:41:58.880346  171911 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0903 23:41:58.880393  171911 cni.go:84] Creating CNI manager for ""
	I0903 23:41:58.880445  171911 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0903 23:41:58.880503  171911 start.go:348] cluster config:
	{Name:old-k8s-version-335468 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21147/minikube-v1.36.0-1753487480-21147-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-335468 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.80 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0903 23:41:58.880659  171911 iso.go:125] acquiring lock: {Name:mk1032fbced3c9e76ba5a04480289e9f07d0eb9c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0903 23:41:58.882387  171911 out.go:179] * Starting "old-k8s-version-335468" primary control-plane node in "old-k8s-version-335468" cluster
	I0903 23:41:58.883545  171911 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0903 23:41:58.883582  171911 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21341-109162/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0903 23:41:58.883591  171911 cache.go:58] Caching tarball of preloaded images
	I0903 23:41:58.883679  171911 preload.go:172] Found /home/jenkins/minikube-integration/21341-109162/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0903 23:41:58.883689  171911 cache.go:61] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0903 23:41:58.883774  171911 profile.go:143] Saving config to /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/old-k8s-version-335468/config.json ...
	I0903 23:41:58.883966  171911 start.go:360] acquireMachinesLock for old-k8s-version-335468: {Name:mkcbe368d68a51a2a3c0eadc653c4df7d9736b4b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0903 23:41:58.884013  171911 start.go:364] duration metric: took 27.848µs to acquireMachinesLock for "old-k8s-version-335468"
	I0903 23:41:58.884027  171911 start.go:96] Skipping create...Using existing machine configuration
	I0903 23:41:58.884034  171911 fix.go:54] fixHost starting: 
	I0903 23:41:58.884290  171911 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21341-109162/.minikube/bin/docker-machine-driver-kvm2
	I0903 23:41:58.884339  171911 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0903 23:41:58.899629  171911 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38175
	I0903 23:41:58.900295  171911 main.go:141] libmachine: () Calling .GetVersion
	I0903 23:41:58.901063  171911 main.go:141] libmachine: Using API Version  1
	I0903 23:41:58.901090  171911 main.go:141] libmachine: () Calling .SetConfigRaw
	I0903 23:41:58.901496  171911 main.go:141] libmachine: () Calling .GetMachineName
	I0903 23:41:58.901698  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .DriverName
	I0903 23:41:58.901857  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetState
	I0903 23:41:58.903463  171911 fix.go:112] recreateIfNeeded on old-k8s-version-335468: state=Stopped err=<nil>
	I0903 23:41:58.903488  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .DriverName
	W0903 23:41:58.903630  171911 fix.go:138] unexpected machine state, will restart: <nil>
	I0903 23:41:58.905426  171911 out.go:252] * Restarting existing kvm2 VM for "old-k8s-version-335468" ...
	I0903 23:41:58.905455  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .Start
	I0903 23:41:58.905612  171911 main.go:141] libmachine: (old-k8s-version-335468) starting domain...
	I0903 23:41:58.905634  171911 main.go:141] libmachine: (old-k8s-version-335468) ensuring networks are active...
	I0903 23:41:58.906424  171911 main.go:141] libmachine: (old-k8s-version-335468) Ensuring network default is active
	I0903 23:41:58.906730  171911 main.go:141] libmachine: (old-k8s-version-335468) Ensuring network mk-old-k8s-version-335468 is active
	I0903 23:41:58.907059  171911 main.go:141] libmachine: (old-k8s-version-335468) getting domain XML...
	I0903 23:41:58.907800  171911 main.go:141] libmachine: (old-k8s-version-335468) creating domain...
	I0903 23:42:00.140356  171911 main.go:141] libmachine: (old-k8s-version-335468) waiting for IP...
	I0903 23:42:00.141202  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has defined MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
	I0903 23:42:00.141582  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | unable to find current IP address of domain old-k8s-version-335468 in network mk-old-k8s-version-335468
	I0903 23:42:00.141709  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | I0903 23:42:00.141590  171947 retry.go:31] will retry after 276.832755ms: waiting for domain to come up
	I0903 23:42:00.420407  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has defined MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
	I0903 23:42:00.420855  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | unable to find current IP address of domain old-k8s-version-335468 in network mk-old-k8s-version-335468
	I0903 23:42:00.420917  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | I0903 23:42:00.420836  171947 retry.go:31] will retry after 314.668622ms: waiting for domain to come up
	I0903 23:42:00.737468  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has defined MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
	I0903 23:42:00.737871  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | unable to find current IP address of domain old-k8s-version-335468 in network mk-old-k8s-version-335468
	I0903 23:42:00.737901  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | I0903 23:42:00.737828  171947 retry.go:31] will retry after 345.8826ms: waiting for domain to come up
	I0903 23:42:01.085701  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has defined MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
	I0903 23:42:01.086185  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | unable to find current IP address of domain old-k8s-version-335468 in network mk-old-k8s-version-335468
	I0903 23:42:01.086217  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | I0903 23:42:01.086168  171947 retry.go:31] will retry after 426.296812ms: waiting for domain to come up
	I0903 23:42:01.513991  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has defined MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
	I0903 23:42:01.514453  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | unable to find current IP address of domain old-k8s-version-335468 in network mk-old-k8s-version-335468
	I0903 23:42:01.514482  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | I0903 23:42:01.514426  171947 retry.go:31] will retry after 602.972692ms: waiting for domain to come up
	I0903 23:42:02.119438  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has defined MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
	I0903 23:42:02.119856  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | unable to find current IP address of domain old-k8s-version-335468 in network mk-old-k8s-version-335468
	I0903 23:42:02.119885  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | I0903 23:42:02.119827  171947 retry.go:31] will retry after 798.351499ms: waiting for domain to come up
	I0903 23:42:02.919839  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has defined MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
	I0903 23:42:02.920276  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | unable to find current IP address of domain old-k8s-version-335468 in network mk-old-k8s-version-335468
	I0903 23:42:02.920307  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | I0903 23:42:02.920220  171947 retry.go:31] will retry after 1.022190105s: waiting for domain to come up
	I0903 23:42:03.944354  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has defined MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
	I0903 23:42:03.944807  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | unable to find current IP address of domain old-k8s-version-335468 in network mk-old-k8s-version-335468
	I0903 23:42:03.944840  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | I0903 23:42:03.944747  171947 retry.go:31] will retry after 1.29364095s: waiting for domain to come up
	I0903 23:42:05.240165  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has defined MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
	I0903 23:42:05.240547  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | unable to find current IP address of domain old-k8s-version-335468 in network mk-old-k8s-version-335468
	I0903 23:42:05.240578  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | I0903 23:42:05.240525  171947 retry.go:31] will retry after 1.368503788s: waiting for domain to come up
	I0903 23:42:06.611109  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has defined MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
	I0903 23:42:06.611618  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | unable to find current IP address of domain old-k8s-version-335468 in network mk-old-k8s-version-335468
	I0903 23:42:06.611652  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | I0903 23:42:06.611578  171947 retry.go:31] will retry after 2.084047059s: waiting for domain to come up
	I0903 23:42:08.698604  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has defined MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
	I0903 23:42:08.699065  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | unable to find current IP address of domain old-k8s-version-335468 in network mk-old-k8s-version-335468
	I0903 23:42:08.699089  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | I0903 23:42:08.699048  171947 retry.go:31] will retry after 2.491740737s: waiting for domain to come up
	I0903 23:42:11.193535  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has defined MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
	I0903 23:42:11.194024  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | unable to find current IP address of domain old-k8s-version-335468 in network mk-old-k8s-version-335468
	I0903 23:42:11.194066  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | I0903 23:42:11.194000  171947 retry.go:31] will retry after 2.442590545s: waiting for domain to come up
	I0903 23:42:13.638462  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has defined MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
	I0903 23:42:13.638791  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | unable to find current IP address of domain old-k8s-version-335468 in network mk-old-k8s-version-335468
	I0903 23:42:13.638812  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | I0903 23:42:13.638754  171947 retry.go:31] will retry after 4.493184117s: waiting for domain to come up
	I0903 23:42:18.134025  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has defined MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
	I0903 23:42:18.134463  171911 main.go:141] libmachine: (old-k8s-version-335468) found domain IP: 192.168.61.80
	I0903 23:42:18.134496  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has current primary IP address 192.168.61.80 and MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
	I0903 23:42:18.134511  171911 main.go:141] libmachine: (old-k8s-version-335468) reserving static IP address...
	I0903 23:42:18.134886  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | found host DHCP lease matching {name: "old-k8s-version-335468", mac: "52:54:00:a2:6b:b9", ip: "192.168.61.80"} in network mk-old-k8s-version-335468: {Iface:virbr3 ExpiryTime:2025-09-04 00:42:10 +0000 UTC Type:0 Mac:52:54:00:a2:6b:b9 Iaid: IPaddr:192.168.61.80 Prefix:24 Hostname:old-k8s-version-335468 Clientid:01:52:54:00:a2:6b:b9}
	I0903 23:42:18.134919  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | skip adding static IP to network mk-old-k8s-version-335468 - found existing host DHCP lease matching {name: "old-k8s-version-335468", mac: "52:54:00:a2:6b:b9", ip: "192.168.61.80"}
	I0903 23:42:18.134935  171911 main.go:141] libmachine: (old-k8s-version-335468) reserved static IP address 192.168.61.80 for domain old-k8s-version-335468
	I0903 23:42:18.134949  171911 main.go:141] libmachine: (old-k8s-version-335468) waiting for SSH...
	I0903 23:42:18.134965  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | Getting to WaitForSSH function...
	I0903 23:42:18.137067  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has defined MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
	I0903 23:42:18.137412  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:6b:b9", ip: ""} in network mk-old-k8s-version-335468: {Iface:virbr3 ExpiryTime:2025-09-04 00:42:10 +0000 UTC Type:0 Mac:52:54:00:a2:6b:b9 Iaid: IPaddr:192.168.61.80 Prefix:24 Hostname:old-k8s-version-335468 Clientid:01:52:54:00:a2:6b:b9}
	I0903 23:42:18.137435  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has defined IP address 192.168.61.80 and MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
	I0903 23:42:18.137591  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | Using SSH client type: external
	I0903 23:42:18.137615  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | Using SSH private key: /home/jenkins/minikube-integration/21341-109162/.minikube/machines/old-k8s-version-335468/id_rsa (-rw-------)
	I0903 23:42:18.137661  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.80 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/21341-109162/.minikube/machines/old-k8s-version-335468/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0903 23:42:18.137678  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | About to run SSH command:
	I0903 23:42:18.137689  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | exit 0
	I0903 23:42:18.265417  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | SSH cmd err, output: <nil>: 
	I0903 23:42:18.265809  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetConfigRaw
	I0903 23:42:18.266396  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetIP
	I0903 23:42:18.269013  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has defined MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
	I0903 23:42:18.269322  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:6b:b9", ip: ""} in network mk-old-k8s-version-335468: {Iface:virbr3 ExpiryTime:2025-09-04 00:42:10 +0000 UTC Type:0 Mac:52:54:00:a2:6b:b9 Iaid: IPaddr:192.168.61.80 Prefix:24 Hostname:old-k8s-version-335468 Clientid:01:52:54:00:a2:6b:b9}
	I0903 23:42:18.269352  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has defined IP address 192.168.61.80 and MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
	I0903 23:42:18.269559  171911 profile.go:143] Saving config to /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/old-k8s-version-335468/config.json ...
	I0903 23:42:18.269795  171911 machine.go:93] provisionDockerMachine start ...
	I0903 23:42:18.269824  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .DriverName
	I0903 23:42:18.270044  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetSSHHostname
	I0903 23:42:18.272246  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has defined MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
	I0903 23:42:18.272543  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:6b:b9", ip: ""} in network mk-old-k8s-version-335468: {Iface:virbr3 ExpiryTime:2025-09-04 00:42:10 +0000 UTC Type:0 Mac:52:54:00:a2:6b:b9 Iaid: IPaddr:192.168.61.80 Prefix:24 Hostname:old-k8s-version-335468 Clientid:01:52:54:00:a2:6b:b9}
	I0903 23:42:18.272584  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has defined IP address 192.168.61.80 and MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
	I0903 23:42:18.272665  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetSSHPort
	I0903 23:42:18.272846  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetSSHKeyPath
	I0903 23:42:18.272997  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetSSHKeyPath
	I0903 23:42:18.273116  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetSSHUsername
	I0903 23:42:18.273294  171911 main.go:141] libmachine: Using SSH client type: native
	I0903 23:42:18.273564  171911 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83a420] 0x83d120 <nil>  [] 0s} 192.168.61.80 22 <nil> <nil>}
	I0903 23:42:18.273578  171911 main.go:141] libmachine: About to run SSH command:
	hostname
	I0903 23:42:18.389858  171911 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0903 23:42:18.389891  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetMachineName
	I0903 23:42:18.390184  171911 buildroot.go:166] provisioning hostname "old-k8s-version-335468"
	I0903 23:42:18.390213  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetMachineName
	I0903 23:42:18.390400  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetSSHHostname
	I0903 23:42:18.393065  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has defined MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
	I0903 23:42:18.393474  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:6b:b9", ip: ""} in network mk-old-k8s-version-335468: {Iface:virbr3 ExpiryTime:2025-09-04 00:42:10 +0000 UTC Type:0 Mac:52:54:00:a2:6b:b9 Iaid: IPaddr:192.168.61.80 Prefix:24 Hostname:old-k8s-version-335468 Clientid:01:52:54:00:a2:6b:b9}
	I0903 23:42:18.393508  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has defined IP address 192.168.61.80 and MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
	I0903 23:42:18.393629  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetSSHPort
	I0903 23:42:18.393787  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetSSHKeyPath
	I0903 23:42:18.393963  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetSSHKeyPath
	I0903 23:42:18.394113  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetSSHUsername
	I0903 23:42:18.394288  171911 main.go:141] libmachine: Using SSH client type: native
	I0903 23:42:18.394494  171911 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83a420] 0x83d120 <nil>  [] 0s} 192.168.61.80 22 <nil> <nil>}
	I0903 23:42:18.394507  171911 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-335468 && echo "old-k8s-version-335468" | sudo tee /etc/hostname
	I0903 23:42:18.526146  171911 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-335468
	
	I0903 23:42:18.526174  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetSSHHostname
	I0903 23:42:18.528979  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has defined MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
	I0903 23:42:18.529317  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:6b:b9", ip: ""} in network mk-old-k8s-version-335468: {Iface:virbr3 ExpiryTime:2025-09-04 00:42:10 +0000 UTC Type:0 Mac:52:54:00:a2:6b:b9 Iaid: IPaddr:192.168.61.80 Prefix:24 Hostname:old-k8s-version-335468 Clientid:01:52:54:00:a2:6b:b9}
	I0903 23:42:18.529341  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has defined IP address 192.168.61.80 and MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
	I0903 23:42:18.529521  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetSSHPort
	I0903 23:42:18.529715  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetSSHKeyPath
	I0903 23:42:18.529887  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetSSHKeyPath
	I0903 23:42:18.530039  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetSSHUsername
	I0903 23:42:18.530198  171911 main.go:141] libmachine: Using SSH client type: native
	I0903 23:42:18.530443  171911 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83a420] 0x83d120 <nil>  [] 0s} 192.168.61.80 22 <nil> <nil>}
	I0903 23:42:18.530462  171911 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-335468' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-335468/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-335468' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0903 23:42:18.655502  171911 main.go:141] libmachine: SSH cmd err, output: <nil>: 
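
The hostname provisioning above is plain shell executed over SSH against the guest VM. As a rough sketch of that pattern (not minikube's actual provisioner), the following Go program runs one idempotent command via golang.org/x/crypto/ssh; the address, user, and key path are placeholders, not values from this run:

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

// runRemote executes one shell command on addr and returns its combined output.
func runRemote(addr, user, keyPath, cmd string) (string, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return "", err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return "", err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for throwaway test VMs only
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return "", err
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		return "", err
	}
	defer sess.Close()
	out, err := sess.CombinedOutput(cmd)
	return string(out), err
}

func main() {
	// Mirrors the logged hostname step; values here are illustrative.
	out, err := runRemote("192.168.61.80:22", "docker", "/path/to/id_rsa",
		`sudo hostname demo && echo "demo" | sudo tee /etc/hostname`)
	fmt.Println(out, err)
}
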
	I0903 23:42:18.655540  171911 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21341-109162/.minikube CaCertPath:/home/jenkins/minikube-integration/21341-109162/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21341-109162/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21341-109162/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21341-109162/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21341-109162/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21341-109162/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21341-109162/.minikube}
	I0903 23:42:18.655578  171911 buildroot.go:174] setting up certificates
	I0903 23:42:18.655591  171911 provision.go:84] configureAuth start
	I0903 23:42:18.655604  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetMachineName
	I0903 23:42:18.655930  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetIP
	I0903 23:42:18.658889  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has defined MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
	I0903 23:42:18.659364  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:6b:b9", ip: ""} in network mk-old-k8s-version-335468: {Iface:virbr3 ExpiryTime:2025-09-04 00:42:10 +0000 UTC Type:0 Mac:52:54:00:a2:6b:b9 Iaid: IPaddr:192.168.61.80 Prefix:24 Hostname:old-k8s-version-335468 Clientid:01:52:54:00:a2:6b:b9}
	I0903 23:42:18.659393  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has defined IP address 192.168.61.80 and MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
	I0903 23:42:18.659574  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetSSHHostname
	I0903 23:42:18.661700  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has defined MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
	I0903 23:42:18.661987  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:6b:b9", ip: ""} in network mk-old-k8s-version-335468: {Iface:virbr3 ExpiryTime:2025-09-04 00:42:10 +0000 UTC Type:0 Mac:52:54:00:a2:6b:b9 Iaid: IPaddr:192.168.61.80 Prefix:24 Hostname:old-k8s-version-335468 Clientid:01:52:54:00:a2:6b:b9}
	I0903 23:42:18.662012  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has defined IP address 192.168.61.80 and MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
	I0903 23:42:18.662134  171911 provision.go:143] copyHostCerts
	I0903 23:42:18.662197  171911 exec_runner.go:144] found /home/jenkins/minikube-integration/21341-109162/.minikube/ca.pem, removing ...
	I0903 23:42:18.662222  171911 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21341-109162/.minikube/ca.pem
	I0903 23:42:18.662298  171911 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21341-109162/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21341-109162/.minikube/ca.pem (1078 bytes)
	I0903 23:42:18.662418  171911 exec_runner.go:144] found /home/jenkins/minikube-integration/21341-109162/.minikube/cert.pem, removing ...
	I0903 23:42:18.662431  171911 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21341-109162/.minikube/cert.pem
	I0903 23:42:18.662468  171911 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21341-109162/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21341-109162/.minikube/cert.pem (1123 bytes)
	I0903 23:42:18.662563  171911 exec_runner.go:144] found /home/jenkins/minikube-integration/21341-109162/.minikube/key.pem, removing ...
	I0903 23:42:18.662573  171911 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21341-109162/.minikube/key.pem
	I0903 23:42:18.662606  171911 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21341-109162/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21341-109162/.minikube/key.pem (1675 bytes)
	I0903 23:42:18.662675  171911 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21341-109162/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21341-109162/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21341-109162/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-335468 san=[127.0.0.1 192.168.61.80 localhost minikube old-k8s-version-335468]
	I0903 23:42:18.981415  171911 provision.go:177] copyRemoteCerts
	I0903 23:42:18.981472  171911 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0903 23:42:18.981497  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetSSHHostname
	I0903 23:42:18.983969  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has defined MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
	I0903 23:42:18.984256  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:6b:b9", ip: ""} in network mk-old-k8s-version-335468: {Iface:virbr3 ExpiryTime:2025-09-04 00:42:10 +0000 UTC Type:0 Mac:52:54:00:a2:6b:b9 Iaid: IPaddr:192.168.61.80 Prefix:24 Hostname:old-k8s-version-335468 Clientid:01:52:54:00:a2:6b:b9}
	I0903 23:42:18.984285  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has defined IP address 192.168.61.80 and MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
	I0903 23:42:18.984430  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetSSHPort
	I0903 23:42:18.984657  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetSSHKeyPath
	I0903 23:42:18.984813  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetSSHUsername
	I0903 23:42:18.984946  171911 sshutil.go:53] new ssh client: &{IP:192.168.61.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21341-109162/.minikube/machines/old-k8s-version-335468/id_rsa Username:docker}
	I0903 23:42:19.073026  171911 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0903 23:42:19.100256  171911 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0903 23:42:19.127225  171911 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0903 23:42:19.154111  171911 provision.go:87] duration metric: took 498.506096ms to configureAuth
	I0903 23:42:19.154138  171911 buildroot.go:189] setting minikube options for container-runtime
	I0903 23:42:19.154358  171911 config.go:182] Loaded profile config "old-k8s-version-335468": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0903 23:42:19.154450  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetSSHHostname
	I0903 23:42:19.157159  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has defined MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
	I0903 23:42:19.157588  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:6b:b9", ip: ""} in network mk-old-k8s-version-335468: {Iface:virbr3 ExpiryTime:2025-09-04 00:42:10 +0000 UTC Type:0 Mac:52:54:00:a2:6b:b9 Iaid: IPaddr:192.168.61.80 Prefix:24 Hostname:old-k8s-version-335468 Clientid:01:52:54:00:a2:6b:b9}
	I0903 23:42:19.157613  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has defined IP address 192.168.61.80 and MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
	I0903 23:42:19.157774  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetSSHPort
	I0903 23:42:19.157993  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetSSHKeyPath
	I0903 23:42:19.158192  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetSSHKeyPath
	I0903 23:42:19.158345  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetSSHUsername
	I0903 23:42:19.158511  171911 main.go:141] libmachine: Using SSH client type: native
	I0903 23:42:19.158713  171911 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83a420] 0x83d120 <nil>  [] 0s} 192.168.61.80 22 <nil> <nil>}
	I0903 23:42:19.158727  171911 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0903 23:42:19.403450  171911 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0903 23:42:19.403503  171911 machine.go:96] duration metric: took 1.133688609s to provisionDockerMachine
	I0903 23:42:19.403516  171911 start.go:293] postStartSetup for "old-k8s-version-335468" (driver="kvm2")
	I0903 23:42:19.403546  171911 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0903 23:42:19.403575  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .DriverName
	I0903 23:42:19.403961  171911 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0903 23:42:19.403992  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetSSHHostname
	I0903 23:42:19.406435  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has defined MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
	I0903 23:42:19.406792  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:6b:b9", ip: ""} in network mk-old-k8s-version-335468: {Iface:virbr3 ExpiryTime:2025-09-04 00:42:10 +0000 UTC Type:0 Mac:52:54:00:a2:6b:b9 Iaid: IPaddr:192.168.61.80 Prefix:24 Hostname:old-k8s-version-335468 Clientid:01:52:54:00:a2:6b:b9}
	I0903 23:42:19.406820  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has defined IP address 192.168.61.80 and MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
	I0903 23:42:19.406954  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetSSHPort
	I0903 23:42:19.407146  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetSSHKeyPath
	I0903 23:42:19.407310  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetSSHUsername
	I0903 23:42:19.407431  171911 sshutil.go:53] new ssh client: &{IP:192.168.61.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21341-109162/.minikube/machines/old-k8s-version-335468/id_rsa Username:docker}
	I0903 23:42:19.498010  171911 ssh_runner.go:195] Run: cat /etc/os-release
	I0903 23:42:19.502446  171911 info.go:137] Remote host: Buildroot 2025.02
	I0903 23:42:19.502472  171911 filesync.go:126] Scanning /home/jenkins/minikube-integration/21341-109162/.minikube/addons for local assets ...
	I0903 23:42:19.502533  171911 filesync.go:126] Scanning /home/jenkins/minikube-integration/21341-109162/.minikube/files for local assets ...
	I0903 23:42:19.502606  171911 filesync.go:149] local asset: /home/jenkins/minikube-integration/21341-109162/.minikube/files/etc/ssl/certs/1132882.pem -> 1132882.pem in /etc/ssl/certs
	I0903 23:42:19.502691  171911 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0903 23:42:19.513148  171911 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/files/etc/ssl/certs/1132882.pem --> /etc/ssl/certs/1132882.pem (1708 bytes)
	I0903 23:42:19.539923  171911 start.go:296] duration metric: took 136.378767ms for postStartSetup
	I0903 23:42:19.539966  171911 fix.go:56] duration metric: took 20.655932447s for fixHost
	I0903 23:42:19.539987  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetSSHHostname
	I0903 23:42:19.542771  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has defined MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
	I0903 23:42:19.543135  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:6b:b9", ip: ""} in network mk-old-k8s-version-335468: {Iface:virbr3 ExpiryTime:2025-09-04 00:42:10 +0000 UTC Type:0 Mac:52:54:00:a2:6b:b9 Iaid: IPaddr:192.168.61.80 Prefix:24 Hostname:old-k8s-version-335468 Clientid:01:52:54:00:a2:6b:b9}
	I0903 23:42:19.543163  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has defined IP address 192.168.61.80 and MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
	I0903 23:42:19.543432  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetSSHPort
	I0903 23:42:19.543661  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetSSHKeyPath
	I0903 23:42:19.543924  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetSSHKeyPath
	I0903 23:42:19.544083  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetSSHUsername
	I0903 23:42:19.544239  171911 main.go:141] libmachine: Using SSH client type: native
	I0903 23:42:19.544450  171911 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83a420] 0x83d120 <nil>  [] 0s} 192.168.61.80 22 <nil> <nil>}
	I0903 23:42:19.544464  171911 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0903 23:42:19.658283  171911 main.go:141] libmachine: SSH cmd err, output: <nil>: 1756942939.619184337
	
	I0903 23:42:19.658310  171911 fix.go:216] guest clock: 1756942939.619184337
	I0903 23:42:19.658320  171911 fix.go:229] Guest: 2025-09-03 23:42:19.619184337 +0000 UTC Remote: 2025-09-03 23:42:19.539969783 +0000 UTC m=+20.799287975 (delta=79.214554ms)
	I0903 23:42:19.658340  171911 fix.go:200] guest clock delta is within tolerance: 79.214554ms
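
The tolerance check above compares the guest's `date +%s.%N` output against the host clock. A minimal sketch of that computation, assuming a nine-digit nanosecond field and an illustrative one-second tolerance (the real threshold is not shown in this log):

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock parses `date +%s.%N` output such as "1756942939.619184337".
// It assumes the fractional part, when present, is exactly nine digits.
func parseGuestClock(s string) (time.Time, error) {
	secStr, nsecStr, _ := strings.Cut(strings.TrimSpace(s), ".")
	sec, err := strconv.ParseInt(secStr, 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if nsecStr != "" {
		if nsec, err = strconv.ParseInt(nsecStr, 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, _ := parseGuestClock("1756942939.619184337")
	host := time.Unix(1756942939, 539969783) // stand-in for time.Now() on the host
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	tolerance := time.Second // assumed value for illustration
	fmt.Printf("delta=%v within tolerance=%v: %v\n", delta, tolerance, delta <= tolerance)
}
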
	I0903 23:42:19.658346  171911 start.go:83] releasing machines lock for "old-k8s-version-335468", held for 20.774323746s
	I0903 23:42:19.658367  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .DriverName
	I0903 23:42:19.658686  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetIP
	I0903 23:42:19.661465  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has defined MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
	I0903 23:42:19.661820  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:6b:b9", ip: ""} in network mk-old-k8s-version-335468: {Iface:virbr3 ExpiryTime:2025-09-04 00:42:10 +0000 UTC Type:0 Mac:52:54:00:a2:6b:b9 Iaid: IPaddr:192.168.61.80 Prefix:24 Hostname:old-k8s-version-335468 Clientid:01:52:54:00:a2:6b:b9}
	I0903 23:42:19.661848  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has defined IP address 192.168.61.80 and MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
	I0903 23:42:19.662028  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .DriverName
	I0903 23:42:19.662525  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .DriverName
	I0903 23:42:19.662702  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .DriverName
	I0903 23:42:19.662785  171911 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0903 23:42:19.662846  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetSSHHostname
	I0903 23:42:19.662927  171911 ssh_runner.go:195] Run: cat /version.json
	I0903 23:42:19.662943  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetSSHHostname
	I0903 23:42:19.665354  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has defined MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
	I0903 23:42:19.665683  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has defined MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
	I0903 23:42:19.665718  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:6b:b9", ip: ""} in network mk-old-k8s-version-335468: {Iface:virbr3 ExpiryTime:2025-09-04 00:42:10 +0000 UTC Type:0 Mac:52:54:00:a2:6b:b9 Iaid: IPaddr:192.168.61.80 Prefix:24 Hostname:old-k8s-version-335468 Clientid:01:52:54:00:a2:6b:b9}
	I0903 23:42:19.665740  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has defined IP address 192.168.61.80 and MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
	I0903 23:42:19.665938  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetSSHPort
	I0903 23:42:19.666142  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:6b:b9", ip: ""} in network mk-old-k8s-version-335468: {Iface:virbr3 ExpiryTime:2025-09-04 00:42:10 +0000 UTC Type:0 Mac:52:54:00:a2:6b:b9 Iaid: IPaddr:192.168.61.80 Prefix:24 Hostname:old-k8s-version-335468 Clientid:01:52:54:00:a2:6b:b9}
	I0903 23:42:19.666154  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetSSHKeyPath
	I0903 23:42:19.666167  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has defined IP address 192.168.61.80 and MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
	I0903 23:42:19.666342  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetSSHUsername
	I0903 23:42:19.666367  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetSSHPort
	I0903 23:42:19.666528  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetSSHKeyPath
	I0903 23:42:19.666520  171911 sshutil.go:53] new ssh client: &{IP:192.168.61.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21341-109162/.minikube/machines/old-k8s-version-335468/id_rsa Username:docker}
	I0903 23:42:19.666673  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetSSHUsername
	I0903 23:42:19.666795  171911 sshutil.go:53] new ssh client: &{IP:192.168.61.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21341-109162/.minikube/machines/old-k8s-version-335468/id_rsa Username:docker}
	I0903 23:42:19.778070  171911 ssh_runner.go:195] Run: systemctl --version
	I0903 23:42:19.783809  171911 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0903 23:42:19.925729  171911 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0903 23:42:19.931814  171911 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0903 23:42:19.931870  171911 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0903 23:42:19.950008  171911 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0903 23:42:19.950038  171911 start.go:495] detecting cgroup driver to use...
	I0903 23:42:19.950104  171911 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0903 23:42:19.969078  171911 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0903 23:42:19.984800  171911 docker.go:218] disabling cri-docker service (if available) ...
	I0903 23:42:19.984862  171911 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0903 23:42:19.999909  171911 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0903 23:42:20.014636  171911 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0903 23:42:20.158742  171911 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0903 23:42:20.297981  171911 docker.go:234] disabling docker service ...
	I0903 23:42:20.298074  171911 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0903 23:42:20.314384  171911 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0903 23:42:20.327885  171911 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0903 23:42:20.530158  171911 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0903 23:42:20.665612  171911 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0903 23:42:20.680150  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0903 23:42:20.700792  171911 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0903 23:42:20.700857  171911 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0903 23:42:20.712182  171911 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0903 23:42:20.712258  171911 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0903 23:42:20.723777  171911 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0903 23:42:20.734863  171911 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0903 23:42:20.746438  171911 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0903 23:42:20.759910  171911 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0903 23:42:20.769436  171911 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0903 23:42:20.769493  171911 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0903 23:42:20.788756  171911 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
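
The three commands above form a probe-then-fallback: if the bridge-netfilter sysctl is missing, load br_netfilter, then enable IPv4 forwarding. A simplified local sketch of the same sequence (shelling out exactly as the log does, with error handling reduced to prints):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Probe: exits with status 1 when /proc/sys/net/bridge/... does not exist.
	if err := exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
		// Fallback: load the module that creates the sysctl.
		if err := exec.Command("sudo", "modprobe", "br_netfilter").Run(); err != nil {
			fmt.Println("modprobe br_netfilter failed:", err)
			return
		}
	}
	// Enable IPv4 forwarding, as in the logged command.
	cmd := exec.Command("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward")
	if err := cmd.Run(); err != nil {
		fmt.Println("enabling ip_forward failed:", err)
	}
}
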
	I0903 23:42:20.799437  171911 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0903 23:42:20.954989  171911 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0903 23:42:21.072550  171911 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0903 23:42:21.072649  171911 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0903 23:42:21.077536  171911 start.go:563] Will wait 60s for crictl version
	I0903 23:42:21.077592  171911 ssh_runner.go:195] Run: which crictl
	I0903 23:42:21.081093  171911 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0903 23:42:21.119015  171911 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0903 23:42:21.119097  171911 ssh_runner.go:195] Run: crio --version
	I0903 23:42:21.146341  171911 ssh_runner.go:195] Run: crio --version
	I0903 23:42:21.176700  171911 out.go:179] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0903 23:42:21.177731  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetIP
	I0903 23:42:21.180269  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has defined MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
	I0903 23:42:21.180568  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:6b:b9", ip: ""} in network mk-old-k8s-version-335468: {Iface:virbr3 ExpiryTime:2025-09-04 00:42:10 +0000 UTC Type:0 Mac:52:54:00:a2:6b:b9 Iaid: IPaddr:192.168.61.80 Prefix:24 Hostname:old-k8s-version-335468 Clientid:01:52:54:00:a2:6b:b9}
	I0903 23:42:21.180599  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has defined IP address 192.168.61.80 and MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
	I0903 23:42:21.180856  171911 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0903 23:42:21.185094  171911 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0903 23:42:21.198784  171911 kubeadm.go:875] updating cluster {Name:old-k8s-version-335468 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21147/minikube-v1.36.0-1753487480-21147-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-335468 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.80 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0903 23:42:21.198887  171911 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0903 23:42:21.198930  171911 ssh_runner.go:195] Run: sudo crictl images --output json
	I0903 23:42:21.245403  171911 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0903 23:42:21.245474  171911 ssh_runner.go:195] Run: which lz4
	I0903 23:42:21.249531  171911 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0903 23:42:21.253934  171911 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0903 23:42:21.253970  171911 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0903 23:42:22.735338  171911 crio.go:462] duration metric: took 1.48583725s to copy over tarball
	I0903 23:42:22.735409  171911 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0903 23:42:24.901192  171911 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.165749867s)
	I0903 23:42:24.901224  171911 crio.go:469] duration metric: took 2.165856963s to extract the tarball
	I0903 23:42:24.901234  171911 ssh_runner.go:146] rm: /preloaded.tar.lz4
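
The preload path above is: copy the .tar.lz4 to the guest, extract it with tar's external lz4 filter while preserving security.capability xattrs, then delete the tarball. A local sketch of just the extraction step, assuming tar and lz4 are on PATH:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// extractPreload unpacks a lz4-compressed tarball into dest, keeping the
// security.capability xattrs that container images depend on.
func extractPreload(tarball, dest string) error {
	cmd := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", dest, "-xf", tarball)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	fmt.Println(extractPreload("/preloaded.tar.lz4", "/var"))
}
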
	I0903 23:42:24.945210  171911 ssh_runner.go:195] Run: sudo crictl images --output json
	I0903 23:42:24.977983  171911 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0903 23:42:24.978011  171911 cache_images.go:89] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0903 23:42:24.978093  171911 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0903 23:42:24.978095  171911 image.go:138] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0903 23:42:24.978122  171911 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0903 23:42:24.978134  171911 image.go:138] retrieving image: registry.k8s.io/coredns:1.7.0
	I0903 23:42:24.978092  171911 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0903 23:42:24.978167  171911 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0903 23:42:24.978180  171911 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0903 23:42:24.978151  171911 image.go:138] retrieving image: registry.k8s.io/pause:3.2
	I0903 23:42:24.979632  171911 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0903 23:42:24.979647  171911 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0903 23:42:24.979664  171911 image.go:181] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0903 23:42:24.979669  171911 image.go:181] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0903 23:42:24.979651  171911 image.go:181] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0903 23:42:24.979683  171911 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0903 23:42:24.979708  171911 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0903 23:42:24.979715  171911 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0903 23:42:25.139789  171911 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0903 23:42:25.149556  171911 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0903 23:42:25.153427  171911 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0903 23:42:25.156447  171911 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0903 23:42:25.166085  171911 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0903 23:42:25.178841  171911 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0903 23:42:25.180227  171911 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0903 23:42:25.223305  171911 cache_images.go:117] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0903 23:42:25.223359  171911 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0903 23:42:25.223398  171911 ssh_runner.go:195] Run: which crictl
	I0903 23:42:25.287785  171911 cache_images.go:117] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0903 23:42:25.287834  171911 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0903 23:42:25.287879  171911 ssh_runner.go:195] Run: which crictl
	I0903 23:42:25.303285  171911 cache_images.go:117] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0903 23:42:25.303336  171911 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0903 23:42:25.303345  171911 cache_images.go:117] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0903 23:42:25.303383  171911 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0903 23:42:25.303392  171911 ssh_runner.go:195] Run: which crictl
	I0903 23:42:25.303431  171911 ssh_runner.go:195] Run: which crictl
	I0903 23:42:25.311751  171911 cache_images.go:117] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0903 23:42:25.311798  171911 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0903 23:42:25.311803  171911 cache_images.go:117] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0903 23:42:25.311842  171911 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0903 23:42:25.311855  171911 ssh_runner.go:195] Run: which crictl
	I0903 23:42:25.311888  171911 ssh_runner.go:195] Run: which crictl
	I0903 23:42:25.324120  171911 cache_images.go:117] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0903 23:42:25.324164  171911 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0903 23:42:25.324187  171911 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0903 23:42:25.324202  171911 ssh_runner.go:195] Run: which crictl
	I0903 23:42:25.324241  171911 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0903 23:42:25.324655  171911 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0903 23:42:25.324678  171911 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0903 23:42:25.324906  171911 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0903 23:42:25.325033  171911 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0903 23:42:25.422314  171911 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0903 23:42:25.422412  171911 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0903 23:42:25.436779  171911 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0903 23:42:25.479512  171911 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0903 23:42:25.482280  171911 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0903 23:42:25.482370  171911 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0903 23:42:25.482417  171911 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0903 23:42:25.528977  171911 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0903 23:42:25.529015  171911 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0903 23:42:25.566801  171911 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0903 23:42:25.639744  171911 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0903 23:42:25.639744  171911 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0903 23:42:25.639814  171911 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0903 23:42:25.639829  171911 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0903 23:42:25.680104  171911 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21341-109162/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0903 23:42:25.680249  171911 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0903 23:42:25.680257  171911 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21341-109162/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0903 23:42:25.724922  171911 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21341-109162/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0903 23:42:25.747501  171911 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21341-109162/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0903 23:42:25.747577  171911 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21341-109162/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0903 23:42:25.751768  171911 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21341-109162/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0903 23:42:25.760936  171911 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21341-109162/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0903 23:42:26.285671  171911 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0903 23:42:26.426376  171911 cache_images.go:93] duration metric: took 1.448344647s to LoadCachedImages
	W0903 23:42:26.426480  171911 out.go:285] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/21341-109162/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0: no such file or directory
	I0903 23:42:26.426499  171911 kubeadm.go:926] updating node { 192.168.61.80 8443 v1.20.0 crio true true} ...
	I0903 23:42:26.426618  171911 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-335468 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.80
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-335468 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0903 23:42:26.426702  171911 ssh_runner.go:195] Run: crio config
	I0903 23:42:26.476895  171911 cni.go:84] Creating CNI manager for ""
	I0903 23:42:26.476919  171911 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0903 23:42:26.476933  171911 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0903 23:42:26.476956  171911 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.80 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-335468 NodeName:old-k8s-version-335468 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.80"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.80 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0903 23:42:26.477114  171911 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.80
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-335468"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.80
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.80"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
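
Configs like the kubeadm manifest above are typically rendered from a template over a small set of node-specific values. A simplified illustration using Go's text/template (the template and field set here are stand-ins, not minikube's real template):

package main

import (
	"os"
	"text/template"
)

const kubeadmTmpl = `apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
nodeRegistration:
  criSocket: {{.CRISocket}}
  name: "{{.NodeName}}"
`

type nodeParams struct {
	AdvertiseAddress string
	BindPort         int
	CRISocket        string
	NodeName         string
}

func main() {
	t := template.Must(template.New("kubeadm").Parse(kubeadmTmpl))
	err := t.Execute(os.Stdout, nodeParams{
		AdvertiseAddress: "192.168.61.80",
		BindPort:         8443,
		CRISocket:        "/var/run/crio/crio.sock",
		NodeName:         "old-k8s-version-335468",
	})
	if err != nil {
		panic(err)
	}
}
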
	
	I0903 23:42:26.477233  171911 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0903 23:42:26.490694  171911 binaries.go:44] Found k8s binaries, skipping transfer
	I0903 23:42:26.490775  171911 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0903 23:42:26.501798  171911 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0903 23:42:26.520806  171911 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0903 23:42:26.539068  171911 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0903 23:42:26.558168  171911 ssh_runner.go:195] Run: grep 192.168.61.80	control-plane.minikube.internal$ /etc/hosts
	I0903 23:42:26.562134  171911 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.80	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0903 23:42:26.575449  171911 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0903 23:42:26.711961  171911 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0903 23:42:26.759354  171911 certs.go:68] Setting up /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/old-k8s-version-335468 for IP: 192.168.61.80
	I0903 23:42:26.759380  171911 certs.go:194] generating shared ca certs ...
	I0903 23:42:26.759407  171911 certs.go:226] acquiring lock for ca certs: {Name:mk06ceac58fd506484973632a7bd0b701183c2c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0903 23:42:26.759577  171911 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21341-109162/.minikube/ca.key
	I0903 23:42:26.759632  171911 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21341-109162/.minikube/proxy-client-ca.key
	I0903 23:42:26.759646  171911 certs.go:256] generating profile certs ...
	I0903 23:42:26.759743  171911 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/old-k8s-version-335468/client.key
	I0903 23:42:26.759820  171911 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/old-k8s-version-335468/apiserver.key.f2828629
	I0903 23:42:26.759878  171911 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/old-k8s-version-335468/proxy-client.key
	I0903 23:42:26.760013  171911 certs.go:484] found cert: /home/jenkins/minikube-integration/21341-109162/.minikube/certs/113288.pem (1338 bytes)
	W0903 23:42:26.760052  171911 certs.go:480] ignoring /home/jenkins/minikube-integration/21341-109162/.minikube/certs/113288_empty.pem, impossibly tiny 0 bytes
	I0903 23:42:26.760066  171911 certs.go:484] found cert: /home/jenkins/minikube-integration/21341-109162/.minikube/certs/ca-key.pem (1675 bytes)
	I0903 23:42:26.760099  171911 certs.go:484] found cert: /home/jenkins/minikube-integration/21341-109162/.minikube/certs/ca.pem (1078 bytes)
	I0903 23:42:26.760133  171911 certs.go:484] found cert: /home/jenkins/minikube-integration/21341-109162/.minikube/certs/cert.pem (1123 bytes)
	I0903 23:42:26.760167  171911 certs.go:484] found cert: /home/jenkins/minikube-integration/21341-109162/.minikube/certs/key.pem (1675 bytes)
	I0903 23:42:26.760220  171911 certs.go:484] found cert: /home/jenkins/minikube-integration/21341-109162/.minikube/files/etc/ssl/certs/1132882.pem (1708 bytes)
	I0903 23:42:26.760811  171911 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0903 23:42:26.791932  171911 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0903 23:42:26.824575  171911 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0903 23:42:26.853358  171911 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0903 23:42:26.887411  171911 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/old-k8s-version-335468/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0903 23:42:26.914421  171911 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/old-k8s-version-335468/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0903 23:42:26.940984  171911 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/old-k8s-version-335468/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0903 23:42:26.968279  171911 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/old-k8s-version-335468/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0903 23:42:26.995059  171911 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/certs/113288.pem --> /usr/share/ca-certificates/113288.pem (1338 bytes)
	I0903 23:42:27.023211  171911 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/files/etc/ssl/certs/1132882.pem --> /usr/share/ca-certificates/1132882.pem (1708 bytes)
	I0903 23:42:27.049929  171911 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0903 23:42:27.076578  171911 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0903 23:42:27.095209  171911 ssh_runner.go:195] Run: openssl version
	I0903 23:42:27.100879  171911 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0903 23:42:27.112933  171911 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0903 23:42:27.118040  171911 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep  3 22:28 /usr/share/ca-certificates/minikubeCA.pem
	I0903 23:42:27.118090  171911 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0903 23:42:27.125341  171911 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0903 23:42:27.140002  171911 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/113288.pem && ln -fs /usr/share/ca-certificates/113288.pem /etc/ssl/certs/113288.pem"
	I0903 23:42:27.154488  171911 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/113288.pem
	I0903 23:42:27.159574  171911 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep  3 22:36 /usr/share/ca-certificates/113288.pem
	I0903 23:42:27.159635  171911 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/113288.pem
	I0903 23:42:27.166580  171911 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/113288.pem /etc/ssl/certs/51391683.0"
	I0903 23:42:27.180666  171911 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1132882.pem && ln -fs /usr/share/ca-certificates/1132882.pem /etc/ssl/certs/1132882.pem"
	I0903 23:42:27.194853  171911 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1132882.pem
	I0903 23:42:27.199793  171911 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep  3 22:36 /usr/share/ca-certificates/1132882.pem
	I0903 23:42:27.199841  171911 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1132882.pem
	I0903 23:42:27.206851  171911 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1132882.pem /etc/ssl/certs/3ec20f2e.0"
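
Editor's note: the "openssl x509 -hash" and "ln -fs" pairs above build OpenSSL's hashed CA directory layout: each CA in /etc/ssl/certs must also be reachable under <subject-hash>.0 (e.g. b5213941.0) for TLS clients to find it. A sketch of that step in Go, assuming openssl is on PATH; illustrative only, not the certs.go implementation.

package sketch

import (
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCACert symlinks a CA certificate under its OpenSSL subject hash
// (e.g. /etc/ssl/certs/b5213941.0) so TLS clients can look it up by hash,
// which is what the "ln -fs" commands above establish.
func linkCACert(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
	os.Remove(link) // "-f": replace any stale link first
	return os.Symlink(pemPath, link)
}
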
	I0903 23:42:27.221163  171911 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0903 23:42:27.226347  171911 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0903 23:42:27.233982  171911 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0903 23:42:27.241290  171911 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0903 23:42:27.248464  171911 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0903 23:42:27.255916  171911 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0903 23:42:27.263308  171911 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
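
Editor's note: "openssl x509 -checkend 86400" exits non-zero when the certificate expires within 86400 seconds (24 hours), so each probe above is a pass/fail freshness check that decides whether a cert needs regeneration. A one-function Go sketch of the same check, under the assumption that exit status is the only signal consumed:

package sketch

import (
	"os/exec"
	"strconv"
)

// certValidFor reports whether the certificate at path remains valid for at
// least the given number of seconds; "openssl x509 -checkend N" exits
// non-zero when the cert expires within N seconds.
func certValidFor(path string, seconds int) bool {
	cmd := exec.Command("openssl", "x509", "-noout", "-in", path,
		"-checkend", strconv.Itoa(seconds))
	return cmd.Run() == nil
}
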
	I0903 23:42:27.270533  171911 kubeadm.go:392] StartCluster: {Name:old-k8s-version-335468 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21147/minikube-v1.36.0-1753487480-21147-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-335468 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.80 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0903 23:42:27.270648  171911 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0903 23:42:27.270739  171911 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0903 23:42:27.306525  171911 cri.go:89] found id: ""
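
Editor's note: `found id: ""` means the crictl query came back empty, i.e. no kube-system containers exist on the node yet. With --quiet, crictl prints one container ID per line, so parsing reduces to splitting on newlines. A hedged sketch of that parsing, not minikube's actual cri.go:

package sketch

import "strings"

// parseCRIIDs splits the output of `crictl ps -a --quiet` (one container ID
// per line) into a slice; an empty result is what the log reports as
// `found id: ""`.
func parseCRIIDs(out string) []string {
	var ids []string
	for _, line := range strings.Split(out, "\n") {
		if id := strings.TrimSpace(line); id != "" {
			ids = append(ids, id)
		}
	}
	return ids
}
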
	I0903 23:42:27.306598  171911 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0903 23:42:27.318570  171911 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0903 23:42:27.318592  171911 kubeadm.go:589] restartPrimaryControlPlane start ...
	I0903 23:42:27.318639  171911 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0903 23:42:27.329789  171911 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0903 23:42:27.330196  171911 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-335468" does not appear in /home/jenkins/minikube-integration/21341-109162/kubeconfig
	I0903 23:42:27.330362  171911 kubeconfig.go:62] /home/jenkins/minikube-integration/21341-109162/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-335468" cluster setting kubeconfig missing "old-k8s-version-335468" context setting]
	I0903 23:42:27.330702  171911 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21341-109162/kubeconfig: {Name:mkb8e4377749caffe9aea4452a6b5de9f0dc7427 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
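
Editor's note: the lock.go "WriteFile acquiring" entry (Delay:500ms Timeout:1m0s) shows the kubeconfig write is serialized behind a lock so parallel test profiles do not clobber the shared file. A loose Go model of retry-until-deadline lock acquisition follows; names and the lock-file mechanism are assumptions, not minikube's lock implementation.

package sketch

import (
	"fmt"
	"os"
	"time"
)

// withFileLock retries an exclusive lock-file create on the given cadence
// until the deadline, then runs fn while holding the lock.
func withFileLock(lockPath string, delay, timeout time.Duration, fn func() error) error {
	deadline := time.Now().Add(timeout)
	for {
		f, err := os.OpenFile(lockPath, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
		if err == nil {
			f.Close()
			defer os.Remove(lockPath)
			return fn()
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out acquiring %s", lockPath)
		}
		time.Sleep(delay)
	}
}
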
	I0903 23:42:27.374758  171911 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0903 23:42:27.386214  171911 kubeadm.go:626] The running cluster does not require reconfiguration: 192.168.61.80
	I0903 23:42:27.386258  171911 kubeadm.go:1152] stopping kube-system containers ...
	I0903 23:42:27.386272  171911 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0903 23:42:27.386331  171911 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0903 23:42:27.425149  171911 cri.go:89] found id: ""
	I0903 23:42:27.425215  171911 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0903 23:42:27.445596  171911 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0903 23:42:27.456478  171911 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0903 23:42:27.456499  171911 kubeadm.go:157] found existing configuration files:
	
	I0903 23:42:27.456562  171911 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0903 23:42:27.466434  171911 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0903 23:42:27.466490  171911 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0903 23:42:27.477542  171911 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0903 23:42:27.487494  171911 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0903 23:42:27.487556  171911 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0903 23:42:27.498329  171911 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0903 23:42:27.508036  171911 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0903 23:42:27.508096  171911 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0903 23:42:27.521941  171911 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0903 23:42:27.531852  171911 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0903 23:42:27.531907  171911 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
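
Editor's note: the four grep-then-rm pairs above implement stale-config cleanup: any kubeconfig under /etc/kubernetes that does not reference the expected control-plane endpoint is removed so kubeadm regenerates it. In this run the greps fail (status 2) only because the files do not exist yet, so the rm calls are no-ops. A local-filesystem sketch of the pattern (the real check runs over ssh):

package sketch

import (
	"os"
	"strings"
)

// cleanStaleConfigs removes any kubeconfig that does not reference the
// expected control-plane endpoint, matching the grep-then-rm sequence above.
func cleanStaleConfigs(endpoint string) {
	for _, f := range []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"} {
		path := "/etc/kubernetes/" + f
		data, err := os.ReadFile(path)
		if err != nil || !strings.Contains(string(data), endpoint) {
			os.Remove(path) // rm -f semantics: ignore failures
		}
	}
}
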
	I0903 23:42:27.542155  171911 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0903 23:42:27.553239  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0903 23:42:27.633226  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0903 23:42:28.602124  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0903 23:42:28.854495  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0903 23:42:28.947073  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
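
Editor's note: rather than a full `kubeadm init`, the restart path replays only the phases needed to bring the control plane back, in the order the five commands above show: certs, kubeconfig, kubelet-start, control-plane, etcd. A simplified Go sketch of that sequence (paths and arguments abbreviated; error handling illustrative):

package sketch

import (
	"fmt"
	"os/exec"
)

// restartControlPlane replays the kubeadm init phases the restart path uses,
// in the order the log shows, instead of running a full `kubeadm init`.
func restartControlPlane(kubeadm, configPath string) error {
	phases := [][]string{
		{"certs", "all"},
		{"kubeconfig", "all"},
		{"kubelet-start"},
		{"control-plane", "all"},
		{"etcd", "local"},
	}
	for _, phase := range phases {
		args := append([]string{"init", "phase"}, phase...)
		args = append(args, "--config", configPath)
		if out, err := exec.Command(kubeadm, args...).CombinedOutput(); err != nil {
			return fmt.Errorf("kubeadm %v failed: %v\n%s", phase, err, out)
		}
	}
	return nil
}
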
	I0903 23:42:29.027974  171911 api_server.go:52] waiting for apiserver process to appear ...
	I0903 23:42:29.028070  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:42:29.528786  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:42:30.029080  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:42:30.529093  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:42:31.029115  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:42:31.528486  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:42:32.029181  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:42:32.528450  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:42:33.028477  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:42:33.529071  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:42:34.028981  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:42:34.528195  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:42:35.028453  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:42:35.528706  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:42:36.028199  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:42:36.528759  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:42:37.028416  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:42:37.528169  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:42:38.028416  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:42:38.528882  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:42:39.028560  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:42:39.528880  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:42:40.029029  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:42:40.528664  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:42:41.028784  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:42:41.528383  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:42:42.028492  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:42:42.528853  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:42:43.028647  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:42:43.528940  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:42:44.028219  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:42:44.528661  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:42:45.029081  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:42:45.528521  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:42:46.028610  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:42:46.529168  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:42:47.028585  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:42:47.528452  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:42:48.028847  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:42:48.528533  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:42:49.028538  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:42:49.529012  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:42:50.029175  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:42:50.528266  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:42:51.028443  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:42:51.528936  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:42:52.028174  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:42:52.528782  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:42:53.028946  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:42:53.529016  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:42:54.029217  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:42:54.528827  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:42:55.028743  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:42:55.528564  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:42:56.029013  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:42:56.528850  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:42:57.028379  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:42:57.528543  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:42:58.028863  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:42:58.528547  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:42:59.028618  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:42:59.528316  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:43:00.028825  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:43:00.528728  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:43:01.028929  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:43:01.528618  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:43:02.028774  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:43:02.528830  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:43:03.028902  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:43:03.528997  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:43:04.028460  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:43:04.529085  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:43:05.028814  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:43:05.528240  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:43:06.028382  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:43:06.528648  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:43:07.028776  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:43:07.528630  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:43:08.028650  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:43:08.528498  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:43:09.028874  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:43:09.529055  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:43:10.028335  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:43:10.528817  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:43:11.029166  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:43:11.528517  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:43:12.028284  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:43:12.528580  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:43:13.028324  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:43:13.528516  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:43:14.028872  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:43:14.529100  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:43:15.029032  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:43:15.528427  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:43:16.028297  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:43:16.528182  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:43:17.028871  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:43:17.528931  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:43:18.028363  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:43:18.528960  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:43:19.028522  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:43:19.528560  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:43:20.028879  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:43:20.528155  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:43:21.028536  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:43:21.528372  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:43:22.028985  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:43:22.529094  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:43:23.028627  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:43:23.529025  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:43:24.028457  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:43:24.528968  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:43:25.028323  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:43:25.528323  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:43:26.028859  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:43:26.528886  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:43:27.028648  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:43:27.528292  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:43:28.028496  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:43:28.528556  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
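
Editor's note: the long run of pgrep lines above is a poll loop waiting for the kube-apiserver process to appear, one probe roughly every 500ms; after about a minute with no match the code falls through to the log-gathering diagnostics below. A minimal Go sketch of the loop, where run is a hypothetical stand-in for minikube's ssh_runner:

package sketch

import (
	"fmt"
	"time"
)

// waitForAPIServer polls for the apiserver process every 500ms until it
// appears or the deadline passes; pgrep exits 0 only once a match exists.
func waitForAPIServer(run func(cmd string) error, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if run(`sudo pgrep -xnf kube-apiserver.*minikube.*`) == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("kube-apiserver process did not appear within %s", timeout)
}
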
	I0903 23:43:29.028482  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:43:29.028567  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:43:29.065203  171911 cri.go:89] found id: ""
	I0903 23:43:29.065238  171911 logs.go:282] 0 containers: []
	W0903 23:43:29.065249  171911 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:43:29.065257  171911 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:43:29.065323  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:43:29.099969  171911 cri.go:89] found id: ""
	I0903 23:43:29.100008  171911 logs.go:282] 0 containers: []
	W0903 23:43:29.100020  171911 logs.go:284] No container was found matching "etcd"
	I0903 23:43:29.100030  171911 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:43:29.100100  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:43:29.134038  171911 cri.go:89] found id: ""
	I0903 23:43:29.134075  171911 logs.go:282] 0 containers: []
	W0903 23:43:29.134088  171911 logs.go:284] No container was found matching "coredns"
	I0903 23:43:29.134096  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:43:29.134166  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:43:29.167976  171911 cri.go:89] found id: ""
	I0903 23:43:29.168009  171911 logs.go:282] 0 containers: []
	W0903 23:43:29.168018  171911 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:43:29.168025  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:43:29.168081  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:43:29.203375  171911 cri.go:89] found id: ""
	I0903 23:43:29.203406  171911 logs.go:282] 0 containers: []
	W0903 23:43:29.203414  171911 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:43:29.203420  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:43:29.203487  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:43:29.237316  171911 cri.go:89] found id: ""
	I0903 23:43:29.237347  171911 logs.go:282] 0 containers: []
	W0903 23:43:29.237358  171911 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:43:29.237366  171911 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:43:29.237456  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:43:29.271010  171911 cri.go:89] found id: ""
	I0903 23:43:29.271036  171911 logs.go:282] 0 containers: []
	W0903 23:43:29.271044  171911 logs.go:284] No container was found matching "kindnet"
	I0903 23:43:29.271051  171911 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0903 23:43:29.271115  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0903 23:43:29.305355  171911 cri.go:89] found id: ""
	I0903 23:43:29.305398  171911 logs.go:282] 0 containers: []
	W0903 23:43:29.305410  171911 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0903 23:43:29.305424  171911 logs.go:123] Gathering logs for container status ...
	I0903 23:43:29.305450  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0903 23:43:29.343610  171911 logs.go:123] Gathering logs for kubelet ...
	I0903 23:43:29.343647  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0903 23:43:29.390474  171911 logs.go:123] Gathering logs for dmesg ...
	I0903 23:43:29.390513  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0903 23:43:29.404227  171911 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:43:29.404255  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:43:29.473354  171911 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0903 23:43:29.473377  171911 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:43:29.473409  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
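
Editor's note: while the wait stalls, the tooling cycles through a fixed set of diagnostic sources (container status, kubelet, dmesg, describe nodes, CRI-O) and keeps going on failure, which is why the failed "describe nodes" above is only a warning: nothing is listening on localhost:8443 while the apiserver is down, so that source is skipped and the rest are still collected. A sketch of the pattern, with commands copied from the log and run a hypothetical runner:

package sketch

// gatherLogs runs each diagnostic source in turn and skips failures rather
// than aborting, mirroring the "Gathering logs for ..." cycle above.
func gatherLogs(run func(cmd string) (string, error)) map[string]string {
	sources := map[string]string{
		"kubelet":          "sudo journalctl -u kubelet -n 400",
		"dmesg":            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
		"CRI-O":            "sudo journalctl -u crio -n 400",
		"container status": "sudo crictl ps -a",
	}
	collected := make(map[string]string)
	for name, cmd := range sources {
		if out, err := run(cmd); err == nil {
			collected[name] = out
		}
	}
	return collected
}
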
	I0903 23:43:32.045578  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:43:32.064442  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:43:32.064510  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:43:32.104125  171911 cri.go:89] found id: ""
	I0903 23:43:32.104153  171911 logs.go:282] 0 containers: []
	W0903 23:43:32.104162  171911 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:43:32.104167  171911 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:43:32.104219  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:43:32.140304  171911 cri.go:89] found id: ""
	I0903 23:43:32.140344  171911 logs.go:282] 0 containers: []
	W0903 23:43:32.140357  171911 logs.go:284] No container was found matching "etcd"
	I0903 23:43:32.140366  171911 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:43:32.140436  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:43:32.174194  171911 cri.go:89] found id: ""
	I0903 23:43:32.174227  171911 logs.go:282] 0 containers: []
	W0903 23:43:32.174241  171911 logs.go:284] No container was found matching "coredns"
	I0903 23:43:32.174249  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:43:32.174322  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:43:32.207732  171911 cri.go:89] found id: ""
	I0903 23:43:32.207760  171911 logs.go:282] 0 containers: []
	W0903 23:43:32.207768  171911 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:43:32.207775  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:43:32.207828  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:43:32.242885  171911 cri.go:89] found id: ""
	I0903 23:43:32.242919  171911 logs.go:282] 0 containers: []
	W0903 23:43:32.242927  171911 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:43:32.242934  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:43:32.242991  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:43:32.276911  171911 cri.go:89] found id: ""
	I0903 23:43:32.276938  171911 logs.go:282] 0 containers: []
	W0903 23:43:32.276945  171911 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:43:32.276952  171911 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:43:32.277004  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:43:32.310660  171911 cri.go:89] found id: ""
	I0903 23:43:32.310689  171911 logs.go:282] 0 containers: []
	W0903 23:43:32.310697  171911 logs.go:284] No container was found matching "kindnet"
	I0903 23:43:32.310703  171911 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0903 23:43:32.310753  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0903 23:43:32.344285  171911 cri.go:89] found id: ""
	I0903 23:43:32.344316  171911 logs.go:282] 0 containers: []
	W0903 23:43:32.344327  171911 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0903 23:43:32.344341  171911 logs.go:123] Gathering logs for kubelet ...
	I0903 23:43:32.344357  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0903 23:43:32.394031  171911 logs.go:123] Gathering logs for dmesg ...
	I0903 23:43:32.394079  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0903 23:43:32.408165  171911 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:43:32.408199  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:43:32.473250  171911 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0903 23:43:32.473279  171911 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:43:32.473293  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0903 23:43:32.556677  171911 logs.go:123] Gathering logs for container status ...
	I0903 23:43:32.556722  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0903 23:43:35.104790  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:43:35.121004  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:43:35.121069  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:43:35.153087  171911 cri.go:89] found id: ""
	I0903 23:43:35.153118  171911 logs.go:282] 0 containers: []
	W0903 23:43:35.153126  171911 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:43:35.153133  171911 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:43:35.153187  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:43:35.185837  171911 cri.go:89] found id: ""
	I0903 23:43:35.185877  171911 logs.go:282] 0 containers: []
	W0903 23:43:35.185885  171911 logs.go:284] No container was found matching "etcd"
	I0903 23:43:35.185891  171911 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:43:35.185947  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:43:35.219367  171911 cri.go:89] found id: ""
	I0903 23:43:35.219410  171911 logs.go:282] 0 containers: []
	W0903 23:43:35.219421  171911 logs.go:284] No container was found matching "coredns"
	I0903 23:43:35.219430  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:43:35.219491  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:43:35.253170  171911 cri.go:89] found id: ""
	I0903 23:43:35.253204  171911 logs.go:282] 0 containers: []
	W0903 23:43:35.253218  171911 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:43:35.253239  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:43:35.253325  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:43:35.285565  171911 cri.go:89] found id: ""
	I0903 23:43:35.285599  171911 logs.go:282] 0 containers: []
	W0903 23:43:35.285611  171911 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:43:35.285620  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:43:35.285688  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:43:35.319446  171911 cri.go:89] found id: ""
	I0903 23:43:35.319476  171911 logs.go:282] 0 containers: []
	W0903 23:43:35.319484  171911 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:43:35.319490  171911 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:43:35.319541  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:43:35.354359  171911 cri.go:89] found id: ""
	I0903 23:43:35.354387  171911 logs.go:282] 0 containers: []
	W0903 23:43:35.354394  171911 logs.go:284] No container was found matching "kindnet"
	I0903 23:43:35.354400  171911 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0903 23:43:35.354452  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0903 23:43:35.390780  171911 cri.go:89] found id: ""
	I0903 23:43:35.390815  171911 logs.go:282] 0 containers: []
	W0903 23:43:35.390825  171911 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0903 23:43:35.390837  171911 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:43:35.390852  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0903 23:43:35.465751  171911 logs.go:123] Gathering logs for container status ...
	I0903 23:43:35.465790  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0903 23:43:35.504480  171911 logs.go:123] Gathering logs for kubelet ...
	I0903 23:43:35.504517  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0903 23:43:35.554283  171911 logs.go:123] Gathering logs for dmesg ...
	I0903 23:43:35.554318  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0903 23:43:35.567404  171911 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:43:35.567436  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:43:35.629663  171911 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0903 23:43:38.130296  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:43:38.146915  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:43:38.147003  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:43:38.179729  171911 cri.go:89] found id: ""
	I0903 23:43:38.179768  171911 logs.go:282] 0 containers: []
	W0903 23:43:38.179781  171911 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:43:38.179791  171911 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:43:38.179863  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:43:38.212185  171911 cri.go:89] found id: ""
	I0903 23:43:38.212215  171911 logs.go:282] 0 containers: []
	W0903 23:43:38.212227  171911 logs.go:284] No container was found matching "etcd"
	I0903 23:43:38.212235  171911 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:43:38.212322  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:43:38.245927  171911 cri.go:89] found id: ""
	I0903 23:43:38.245953  171911 logs.go:282] 0 containers: []
	W0903 23:43:38.245960  171911 logs.go:284] No container was found matching "coredns"
	I0903 23:43:38.245966  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:43:38.246027  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:43:38.280868  171911 cri.go:89] found id: ""
	I0903 23:43:38.280900  171911 logs.go:282] 0 containers: []
	W0903 23:43:38.280911  171911 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:43:38.280918  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:43:38.281003  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:43:38.321240  171911 cri.go:89] found id: ""
	I0903 23:43:38.321275  171911 logs.go:282] 0 containers: []
	W0903 23:43:38.321288  171911 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:43:38.321298  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:43:38.321407  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:43:38.375140  171911 cri.go:89] found id: ""
	I0903 23:43:38.375169  171911 logs.go:282] 0 containers: []
	W0903 23:43:38.375183  171911 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:43:38.375191  171911 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:43:38.375277  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:43:38.418890  171911 cri.go:89] found id: ""
	I0903 23:43:38.418928  171911 logs.go:282] 0 containers: []
	W0903 23:43:38.418940  171911 logs.go:284] No container was found matching "kindnet"
	I0903 23:43:38.418950  171911 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0903 23:43:38.419019  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0903 23:43:38.452908  171911 cri.go:89] found id: ""
	I0903 23:43:38.452938  171911 logs.go:282] 0 containers: []
	W0903 23:43:38.452949  171911 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0903 23:43:38.452962  171911 logs.go:123] Gathering logs for kubelet ...
	I0903 23:43:38.452978  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0903 23:43:38.503416  171911 logs.go:123] Gathering logs for dmesg ...
	I0903 23:43:38.503460  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0903 23:43:38.517203  171911 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:43:38.517233  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:43:38.580070  171911 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0903 23:43:38.580096  171911 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:43:38.580110  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0903 23:43:38.652380  171911 logs.go:123] Gathering logs for container status ...
	I0903 23:43:38.652420  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0903 23:43:41.192031  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:43:41.208483  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:43:41.208546  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:43:41.241854  171911 cri.go:89] found id: ""
	I0903 23:43:41.241880  171911 logs.go:282] 0 containers: []
	W0903 23:43:41.241887  171911 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:43:41.241895  171911 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:43:41.241953  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:43:41.276043  171911 cri.go:89] found id: ""
	I0903 23:43:41.276070  171911 logs.go:282] 0 containers: []
	W0903 23:43:41.276078  171911 logs.go:284] No container was found matching "etcd"
	I0903 23:43:41.276084  171911 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:43:41.276136  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:43:41.312473  171911 cri.go:89] found id: ""
	I0903 23:43:41.312503  171911 logs.go:282] 0 containers: []
	W0903 23:43:41.312514  171911 logs.go:284] No container was found matching "coredns"
	I0903 23:43:41.312522  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:43:41.312591  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:43:41.345515  171911 cri.go:89] found id: ""
	I0903 23:43:41.345543  171911 logs.go:282] 0 containers: []
	W0903 23:43:41.345551  171911 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:43:41.345558  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:43:41.345630  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:43:41.378505  171911 cri.go:89] found id: ""
	I0903 23:43:41.378539  171911 logs.go:282] 0 containers: []
	W0903 23:43:41.378547  171911 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:43:41.378554  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:43:41.378613  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:43:41.414245  171911 cri.go:89] found id: ""
	I0903 23:43:41.414276  171911 logs.go:282] 0 containers: []
	W0903 23:43:41.414284  171911 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:43:41.414290  171911 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:43:41.414351  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:43:41.450931  171911 cri.go:89] found id: ""
	I0903 23:43:41.450969  171911 logs.go:282] 0 containers: []
	W0903 23:43:41.450981  171911 logs.go:284] No container was found matching "kindnet"
	I0903 23:43:41.451050  171911 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0903 23:43:41.451126  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0903 23:43:41.484869  171911 cri.go:89] found id: ""
	I0903 23:43:41.484898  171911 logs.go:282] 0 containers: []
	W0903 23:43:41.484906  171911 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0903 23:43:41.484916  171911 logs.go:123] Gathering logs for dmesg ...
	I0903 23:43:41.484934  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0903 23:43:41.498189  171911 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:43:41.498219  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:43:41.560558  171911 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0903 23:43:41.560583  171911 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:43:41.560601  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0903 23:43:41.637195  171911 logs.go:123] Gathering logs for container status ...
	I0903 23:43:41.637235  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0903 23:43:41.675448  171911 logs.go:123] Gathering logs for kubelet ...
	I0903 23:43:41.675478  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0903 23:43:44.223401  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:43:44.253341  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:43:44.253423  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:43:44.300478  171911 cri.go:89] found id: ""
	I0903 23:43:44.300512  171911 logs.go:282] 0 containers: []
	W0903 23:43:44.300523  171911 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:43:44.300531  171911 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:43:44.300625  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:43:44.342127  171911 cri.go:89] found id: ""
	I0903 23:43:44.342158  171911 logs.go:282] 0 containers: []
	W0903 23:43:44.342166  171911 logs.go:284] No container was found matching "etcd"
	I0903 23:43:44.342178  171911 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:43:44.342242  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:43:44.392479  171911 cri.go:89] found id: ""
	I0903 23:43:44.392505  171911 logs.go:282] 0 containers: []
	W0903 23:43:44.392514  171911 logs.go:284] No container was found matching "coredns"
	I0903 23:43:44.392522  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:43:44.392587  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:43:44.428584  171911 cri.go:89] found id: ""
	I0903 23:43:44.428627  171911 logs.go:282] 0 containers: []
	W0903 23:43:44.428646  171911 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:43:44.428655  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:43:44.428724  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:43:44.463165  171911 cri.go:89] found id: ""
	I0903 23:43:44.463196  171911 logs.go:282] 0 containers: []
	W0903 23:43:44.463205  171911 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:43:44.463214  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:43:44.463276  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:43:44.497562  171911 cri.go:89] found id: ""
	I0903 23:43:44.497599  171911 logs.go:282] 0 containers: []
	W0903 23:43:44.497606  171911 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:43:44.497616  171911 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:43:44.497671  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:43:44.532319  171911 cri.go:89] found id: ""
	I0903 23:43:44.532349  171911 logs.go:282] 0 containers: []
	W0903 23:43:44.532356  171911 logs.go:284] No container was found matching "kindnet"
	I0903 23:43:44.532371  171911 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0903 23:43:44.532431  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0903 23:43:44.567181  171911 cri.go:89] found id: ""
	I0903 23:43:44.567214  171911 logs.go:282] 0 containers: []
	W0903 23:43:44.567229  171911 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0903 23:43:44.567242  171911 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:43:44.567259  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0903 23:43:44.647186  171911 logs.go:123] Gathering logs for container status ...
	I0903 23:43:44.647237  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0903 23:43:44.684779  171911 logs.go:123] Gathering logs for kubelet ...
	I0903 23:43:44.684815  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0903 23:43:44.734346  171911 logs.go:123] Gathering logs for dmesg ...
	I0903 23:43:44.734384  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0903 23:43:44.748304  171911 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:43:44.748333  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:43:44.811995  171911 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0903 23:43:47.313737  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:43:47.330976  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:43:47.331047  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:43:47.365152  171911 cri.go:89] found id: ""
	I0903 23:43:47.365183  171911 logs.go:282] 0 containers: []
	W0903 23:43:47.365191  171911 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:43:47.365198  171911 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:43:47.365250  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:43:47.402002  171911 cri.go:89] found id: ""
	I0903 23:43:47.402034  171911 logs.go:282] 0 containers: []
	W0903 23:43:47.402042  171911 logs.go:284] No container was found matching "etcd"
	I0903 23:43:47.402048  171911 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:43:47.402103  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:43:47.439574  171911 cri.go:89] found id: ""
	I0903 23:43:47.439611  171911 logs.go:282] 0 containers: []
	W0903 23:43:47.439619  171911 logs.go:284] No container was found matching "coredns"
	I0903 23:43:47.439626  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:43:47.439694  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:43:47.474877  171911 cri.go:89] found id: ""
	I0903 23:43:47.474910  171911 logs.go:282] 0 containers: []
	W0903 23:43:47.474918  171911 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:43:47.474925  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:43:47.474980  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:43:47.511850  171911 cri.go:89] found id: ""
	I0903 23:43:47.511882  171911 logs.go:282] 0 containers: []
	W0903 23:43:47.511889  171911 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:43:47.511896  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:43:47.511952  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:43:47.545975  171911 cri.go:89] found id: ""
	I0903 23:43:47.546011  171911 logs.go:282] 0 containers: []
	W0903 23:43:47.546022  171911 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:43:47.546032  171911 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:43:47.546091  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:43:47.581967  171911 cri.go:89] found id: ""
	I0903 23:43:47.581996  171911 logs.go:282] 0 containers: []
	W0903 23:43:47.582004  171911 logs.go:284] No container was found matching "kindnet"
	I0903 23:43:47.582010  171911 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0903 23:43:47.582079  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0903 23:43:47.617442  171911 cri.go:89] found id: ""
	I0903 23:43:47.617470  171911 logs.go:282] 0 containers: []
	W0903 23:43:47.617478  171911 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0903 23:43:47.617487  171911 logs.go:123] Gathering logs for container status ...
	I0903 23:43:47.617499  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0903 23:43:47.655119  171911 logs.go:123] Gathering logs for kubelet ...
	I0903 23:43:47.655150  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0903 23:43:47.702001  171911 logs.go:123] Gathering logs for dmesg ...
	I0903 23:43:47.702035  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0903 23:43:47.715671  171911 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:43:47.715701  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:43:47.781271  171911 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0903 23:43:47.781297  171911 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:43:47.781310  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
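Each cycle above follows the same probe pattern: pgrep for a running kube-apiserver, then one `crictl ps -a --quiet --name=<component>` per control-plane component; an empty result produces the `found id: ""` and `0 containers: []` lines, after which the fallback log sources (kubelet, dmesg, describe nodes, CRI-O, container status) are gathered. A hypothetical sketch of that probe loop, assuming a stand-in runSSH helper rather than minikube's real ssh_runner API:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// runSSH is illustrative: minikube runs these commands over SSH
	// inside the VM; here we simply exec them locally.
	func runSSH(cmd string) (string, error) {
		out, err := exec.Command("/bin/bash", "-c", cmd).Output()
		return string(out), err
	}

	func main() {
		components := []string{
			"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "kindnet",
			"kubernetes-dashboard",
		}
		for _, name := range components {
			// Empty output from crictl yields the "0 containers: []"
			// lines seen throughout this log.
			out, _ := runSSH("sudo crictl ps -a --quiet --name=" + name)
			ids := strings.Fields(out)
			fmt.Printf("%s: %d containers: %v\n", name, len(ids), ids)
		}
	}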
	I0903 23:43:50.353562  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:43:50.370200  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:43:50.370271  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:43:50.404593  171911 cri.go:89] found id: ""
	I0903 23:43:50.404621  171911 logs.go:282] 0 containers: []
	W0903 23:43:50.404631  171911 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:43:50.404640  171911 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:43:50.404714  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:43:50.438454  171911 cri.go:89] found id: ""
	I0903 23:43:50.438482  171911 logs.go:282] 0 containers: []
	W0903 23:43:50.438491  171911 logs.go:284] No container was found matching "etcd"
	I0903 23:43:50.438498  171911 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:43:50.438609  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:43:50.474138  171911 cri.go:89] found id: ""
	I0903 23:43:50.474165  171911 logs.go:282] 0 containers: []
	W0903 23:43:50.474176  171911 logs.go:284] No container was found matching "coredns"
	I0903 23:43:50.474184  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:43:50.474247  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:43:50.506277  171911 cri.go:89] found id: ""
	I0903 23:43:50.506308  171911 logs.go:282] 0 containers: []
	W0903 23:43:50.506319  171911 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:43:50.506328  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:43:50.506398  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:43:50.540877  171911 cri.go:89] found id: ""
	I0903 23:43:50.540905  171911 logs.go:282] 0 containers: []
	W0903 23:43:50.540912  171911 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:43:50.540918  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:43:50.540969  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:43:50.574490  171911 cri.go:89] found id: ""
	I0903 23:43:50.574548  171911 logs.go:282] 0 containers: []
	W0903 23:43:50.574567  171911 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:43:50.574578  171911 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:43:50.574654  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:43:50.608197  171911 cri.go:89] found id: ""
	I0903 23:43:50.608225  171911 logs.go:282] 0 containers: []
	W0903 23:43:50.608233  171911 logs.go:284] No container was found matching "kindnet"
	I0903 23:43:50.608238  171911 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0903 23:43:50.608288  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0903 23:43:50.641053  171911 cri.go:89] found id: ""
	I0903 23:43:50.641082  171911 logs.go:282] 0 containers: []
	W0903 23:43:50.641089  171911 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0903 23:43:50.641099  171911 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:43:50.641109  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0903 23:43:50.712696  171911 logs.go:123] Gathering logs for container status ...
	I0903 23:43:50.712742  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0903 23:43:50.749969  171911 logs.go:123] Gathering logs for kubelet ...
	I0903 23:43:50.750001  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0903 23:43:50.800039  171911 logs.go:123] Gathering logs for dmesg ...
	I0903 23:43:50.800074  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0903 23:43:50.813705  171911 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:43:50.813736  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:43:50.876873  171911 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0903 23:43:53.378585  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:43:53.395927  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:43:53.395997  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:43:53.429784  171911 cri.go:89] found id: ""
	I0903 23:43:53.429814  171911 logs.go:282] 0 containers: []
	W0903 23:43:53.429821  171911 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:43:53.429827  171911 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:43:53.429880  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:43:53.463718  171911 cri.go:89] found id: ""
	I0903 23:43:53.463745  171911 logs.go:282] 0 containers: []
	W0903 23:43:53.463753  171911 logs.go:284] No container was found matching "etcd"
	I0903 23:43:53.463759  171911 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:43:53.463815  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:43:53.499017  171911 cri.go:89] found id: ""
	I0903 23:43:53.499046  171911 logs.go:282] 0 containers: []
	W0903 23:43:53.499056  171911 logs.go:284] No container was found matching "coredns"
	I0903 23:43:53.499065  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:43:53.499132  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:43:53.534239  171911 cri.go:89] found id: ""
	I0903 23:43:53.534273  171911 logs.go:282] 0 containers: []
	W0903 23:43:53.534283  171911 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:43:53.534290  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:43:53.534353  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:43:53.567405  171911 cri.go:89] found id: ""
	I0903 23:43:53.567431  171911 logs.go:282] 0 containers: []
	W0903 23:43:53.567438  171911 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:43:53.567445  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:43:53.567500  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:43:53.603686  171911 cri.go:89] found id: ""
	I0903 23:43:53.603722  171911 logs.go:282] 0 containers: []
	W0903 23:43:53.603733  171911 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:43:53.603742  171911 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:43:53.603805  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:43:53.638591  171911 cri.go:89] found id: ""
	I0903 23:43:53.638618  171911 logs.go:282] 0 containers: []
	W0903 23:43:53.638627  171911 logs.go:284] No container was found matching "kindnet"
	I0903 23:43:53.638635  171911 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0903 23:43:53.638698  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0903 23:43:53.672243  171911 cri.go:89] found id: ""
	I0903 23:43:53.672288  171911 logs.go:282] 0 containers: []
	W0903 23:43:53.672296  171911 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0903 23:43:53.672305  171911 logs.go:123] Gathering logs for kubelet ...
	I0903 23:43:53.672318  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0903 23:43:53.721410  171911 logs.go:123] Gathering logs for dmesg ...
	I0903 23:43:53.721448  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0903 23:43:53.735356  171911 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:43:53.735386  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:43:53.797966  171911 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0903 23:43:53.797988  171911 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:43:53.798005  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0903 23:43:53.872491  171911 logs.go:123] Gathering logs for container status ...
	I0903 23:43:53.872529  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0903 23:43:56.410853  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:43:56.427796  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:43:56.427871  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:43:56.460023  171911 cri.go:89] found id: ""
	I0903 23:43:56.460066  171911 logs.go:282] 0 containers: []
	W0903 23:43:56.460077  171911 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:43:56.460085  171911 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:43:56.460160  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:43:56.494386  171911 cri.go:89] found id: ""
	I0903 23:43:56.494414  171911 logs.go:282] 0 containers: []
	W0903 23:43:56.494424  171911 logs.go:284] No container was found matching "etcd"
	I0903 23:43:56.494432  171911 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:43:56.494492  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:43:56.529298  171911 cri.go:89] found id: ""
	I0903 23:43:56.529329  171911 logs.go:282] 0 containers: []
	W0903 23:43:56.529339  171911 logs.go:284] No container was found matching "coredns"
	I0903 23:43:56.529356  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:43:56.529433  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:43:56.562775  171911 cri.go:89] found id: ""
	I0903 23:43:56.562818  171911 logs.go:282] 0 containers: []
	W0903 23:43:56.562830  171911 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:43:56.562837  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:43:56.562898  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:43:56.604698  171911 cri.go:89] found id: ""
	I0903 23:43:56.604739  171911 logs.go:282] 0 containers: []
	W0903 23:43:56.604751  171911 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:43:56.604758  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:43:56.604811  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:43:56.644278  171911 cri.go:89] found id: ""
	I0903 23:43:56.644307  171911 logs.go:282] 0 containers: []
	W0903 23:43:56.644319  171911 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:43:56.644328  171911 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:43:56.644397  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:43:56.686334  171911 cri.go:89] found id: ""
	I0903 23:43:56.686366  171911 logs.go:282] 0 containers: []
	W0903 23:43:56.686377  171911 logs.go:284] No container was found matching "kindnet"
	I0903 23:43:56.686385  171911 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0903 23:43:56.686458  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0903 23:43:56.725441  171911 cri.go:89] found id: ""
	I0903 23:43:56.725466  171911 logs.go:282] 0 containers: []
	W0903 23:43:56.725486  171911 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0903 23:43:56.725494  171911 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:43:56.725508  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:43:56.791969  171911 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0903 23:43:56.792002  171911 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:43:56.792021  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0903 23:43:56.866297  171911 logs.go:123] Gathering logs for container status ...
	I0903 23:43:56.866338  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0903 23:43:56.904335  171911 logs.go:123] Gathering logs for kubelet ...
	I0903 23:43:56.904372  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0903 23:43:56.952822  171911 logs.go:123] Gathering logs for dmesg ...
	I0903 23:43:56.952863  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0903 23:43:59.466793  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:43:59.484556  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:43:59.484633  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:43:59.521818  171911 cri.go:89] found id: ""
	I0903 23:43:59.521848  171911 logs.go:282] 0 containers: []
	W0903 23:43:59.521860  171911 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:43:59.521868  171911 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:43:59.521945  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:43:59.556474  171911 cri.go:89] found id: ""
	I0903 23:43:59.556501  171911 logs.go:282] 0 containers: []
	W0903 23:43:59.556509  171911 logs.go:284] No container was found matching "etcd"
	I0903 23:43:59.556515  171911 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:43:59.556569  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:43:59.591410  171911 cri.go:89] found id: ""
	I0903 23:43:59.591440  171911 logs.go:282] 0 containers: []
	W0903 23:43:59.591447  171911 logs.go:284] No container was found matching "coredns"
	I0903 23:43:59.591453  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:43:59.591503  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:43:59.625559  171911 cri.go:89] found id: ""
	I0903 23:43:59.625587  171911 logs.go:282] 0 containers: []
	W0903 23:43:59.625593  171911 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:43:59.625615  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:43:59.625668  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:43:59.659603  171911 cri.go:89] found id: ""
	I0903 23:43:59.659635  171911 logs.go:282] 0 containers: []
	W0903 23:43:59.659643  171911 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:43:59.659655  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:43:59.659713  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:43:59.700514  171911 cri.go:89] found id: ""
	I0903 23:43:59.700553  171911 logs.go:282] 0 containers: []
	W0903 23:43:59.700566  171911 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:43:59.700576  171911 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:43:59.700669  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:43:59.734778  171911 cri.go:89] found id: ""
	I0903 23:43:59.734805  171911 logs.go:282] 0 containers: []
	W0903 23:43:59.734816  171911 logs.go:284] No container was found matching "kindnet"
	I0903 23:43:59.734824  171911 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0903 23:43:59.734884  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0903 23:43:59.769663  171911 cri.go:89] found id: ""
	I0903 23:43:59.769703  171911 logs.go:282] 0 containers: []
	W0903 23:43:59.769714  171911 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0903 23:43:59.769727  171911 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:43:59.769743  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:43:59.832033  171911 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0903 23:43:59.832056  171911 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:43:59.832075  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0903 23:43:59.905304  171911 logs.go:123] Gathering logs for container status ...
	I0903 23:43:59.905348  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0903 23:43:59.942790  171911 logs.go:123] Gathering logs for kubelet ...
	I0903 23:43:59.942823  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0903 23:43:59.992617  171911 logs.go:123] Gathering logs for dmesg ...
	I0903 23:43:59.992660  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0903 23:44:02.508378  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:44:02.525572  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:44:02.525652  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:44:02.561330  171911 cri.go:89] found id: ""
	I0903 23:44:02.561361  171911 logs.go:282] 0 containers: []
	W0903 23:44:02.561369  171911 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:44:02.561375  171911 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:44:02.561461  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:44:02.595933  171911 cri.go:89] found id: ""
	I0903 23:44:02.595962  171911 logs.go:282] 0 containers: []
	W0903 23:44:02.595970  171911 logs.go:284] No container was found matching "etcd"
	I0903 23:44:02.595975  171911 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:44:02.596041  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:44:02.628817  171911 cri.go:89] found id: ""
	I0903 23:44:02.628854  171911 logs.go:282] 0 containers: []
	W0903 23:44:02.628865  171911 logs.go:284] No container was found matching "coredns"
	I0903 23:44:02.628873  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:44:02.628944  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:44:02.665027  171911 cri.go:89] found id: ""
	I0903 23:44:02.665060  171911 logs.go:282] 0 containers: []
	W0903 23:44:02.665072  171911 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:44:02.665079  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:44:02.665143  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:44:02.698721  171911 cri.go:89] found id: ""
	I0903 23:44:02.698752  171911 logs.go:282] 0 containers: []
	W0903 23:44:02.698761  171911 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:44:02.698768  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:44:02.698822  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:44:02.736138  171911 cri.go:89] found id: ""
	I0903 23:44:02.736170  171911 logs.go:282] 0 containers: []
	W0903 23:44:02.736180  171911 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:44:02.736188  171911 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:44:02.736254  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:44:02.770089  171911 cri.go:89] found id: ""
	I0903 23:44:02.770120  171911 logs.go:282] 0 containers: []
	W0903 23:44:02.770127  171911 logs.go:284] No container was found matching "kindnet"
	I0903 23:44:02.770134  171911 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0903 23:44:02.770201  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0903 23:44:02.805595  171911 cri.go:89] found id: ""
	I0903 23:44:02.805627  171911 logs.go:282] 0 containers: []
	W0903 23:44:02.805638  171911 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0903 23:44:02.805650  171911 logs.go:123] Gathering logs for kubelet ...
	I0903 23:44:02.805666  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0903 23:44:02.855714  171911 logs.go:123] Gathering logs for dmesg ...
	I0903 23:44:02.855753  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0903 23:44:02.870817  171911 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:44:02.870854  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:44:02.935987  171911 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0903 23:44:02.936011  171911 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:44:02.936025  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0903 23:44:03.013471  171911 logs.go:123] Gathering logs for container status ...
	I0903 23:44:03.013513  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0903 23:44:05.553522  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:44:05.570805  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:44:05.570869  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:44:05.606023  171911 cri.go:89] found id: ""
	I0903 23:44:05.606061  171911 logs.go:282] 0 containers: []
	W0903 23:44:05.606075  171911 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:44:05.606084  171911 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:44:05.606151  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:44:05.640331  171911 cri.go:89] found id: ""
	I0903 23:44:05.640362  171911 logs.go:282] 0 containers: []
	W0903 23:44:05.640374  171911 logs.go:284] No container was found matching "etcd"
	I0903 23:44:05.640380  171911 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:44:05.640455  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:44:05.675579  171911 cri.go:89] found id: ""
	I0903 23:44:05.675613  171911 logs.go:282] 0 containers: []
	W0903 23:44:05.675626  171911 logs.go:284] No container was found matching "coredns"
	I0903 23:44:05.675634  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:44:05.675698  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:44:05.710190  171911 cri.go:89] found id: ""
	I0903 23:44:05.710219  171911 logs.go:282] 0 containers: []
	W0903 23:44:05.710226  171911 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:44:05.710233  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:44:05.710292  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:44:05.745803  171911 cri.go:89] found id: ""
	I0903 23:44:05.745834  171911 logs.go:282] 0 containers: []
	W0903 23:44:05.745843  171911 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:44:05.745850  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:44:05.745908  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:44:05.780095  171911 cri.go:89] found id: ""
	I0903 23:44:05.780126  171911 logs.go:282] 0 containers: []
	W0903 23:44:05.780134  171911 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:44:05.780141  171911 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:44:05.780193  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:44:05.812816  171911 cri.go:89] found id: ""
	I0903 23:44:05.812849  171911 logs.go:282] 0 containers: []
	W0903 23:44:05.812862  171911 logs.go:284] No container was found matching "kindnet"
	I0903 23:44:05.812870  171911 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0903 23:44:05.812944  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0903 23:44:05.845992  171911 cri.go:89] found id: ""
	I0903 23:44:05.846024  171911 logs.go:282] 0 containers: []
	W0903 23:44:05.846032  171911 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0903 23:44:05.846041  171911 logs.go:123] Gathering logs for kubelet ...
	I0903 23:44:05.846053  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0903 23:44:05.896122  171911 logs.go:123] Gathering logs for dmesg ...
	I0903 23:44:05.896163  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0903 23:44:05.910777  171911 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:44:05.910815  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:44:05.973743  171911 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0903 23:44:05.973771  171911 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:44:05.973784  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0903 23:44:06.047880  171911 logs.go:123] Gathering logs for container status ...
	I0903 23:44:06.047924  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0903 23:44:08.588751  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:44:08.605926  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:44:08.605989  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:44:08.639229  171911 cri.go:89] found id: ""
	I0903 23:44:08.639260  171911 logs.go:282] 0 containers: []
	W0903 23:44:08.639268  171911 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:44:08.639275  171911 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:44:08.639332  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:44:08.673218  171911 cri.go:89] found id: ""
	I0903 23:44:08.673263  171911 logs.go:282] 0 containers: []
	W0903 23:44:08.673274  171911 logs.go:284] No container was found matching "etcd"
	I0903 23:44:08.673283  171911 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:44:08.673353  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:44:08.708635  171911 cri.go:89] found id: ""
	I0903 23:44:08.708665  171911 logs.go:282] 0 containers: []
	W0903 23:44:08.708676  171911 logs.go:284] No container was found matching "coredns"
	I0903 23:44:08.708685  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:44:08.708755  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:44:08.744277  171911 cri.go:89] found id: ""
	I0903 23:44:08.744304  171911 logs.go:282] 0 containers: []
	W0903 23:44:08.744311  171911 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:44:08.744318  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:44:08.744385  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:44:08.778421  171911 cri.go:89] found id: ""
	I0903 23:44:08.778451  171911 logs.go:282] 0 containers: []
	W0903 23:44:08.778469  171911 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:44:08.778477  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:44:08.778541  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:44:08.815240  171911 cri.go:89] found id: ""
	I0903 23:44:08.815277  171911 logs.go:282] 0 containers: []
	W0903 23:44:08.815290  171911 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:44:08.815298  171911 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:44:08.815371  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:44:08.849900  171911 cri.go:89] found id: ""
	I0903 23:44:08.849929  171911 logs.go:282] 0 containers: []
	W0903 23:44:08.849936  171911 logs.go:284] No container was found matching "kindnet"
	I0903 23:44:08.849942  171911 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0903 23:44:08.849993  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0903 23:44:08.885596  171911 cri.go:89] found id: ""
	I0903 23:44:08.885631  171911 logs.go:282] 0 containers: []
	W0903 23:44:08.885641  171911 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0903 23:44:08.885651  171911 logs.go:123] Gathering logs for container status ...
	I0903 23:44:08.885668  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0903 23:44:08.924882  171911 logs.go:123] Gathering logs for kubelet ...
	I0903 23:44:08.924909  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0903 23:44:08.976269  171911 logs.go:123] Gathering logs for dmesg ...
	I0903 23:44:08.976304  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0903 23:44:08.993447  171911 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:44:08.993483  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:44:09.069817  171911 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0903 23:44:09.069845  171911 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:44:09.069862  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0903 23:44:11.651779  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:44:11.668352  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:44:11.668423  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:44:11.703206  171911 cri.go:89] found id: ""
	I0903 23:44:11.703243  171911 logs.go:282] 0 containers: []
	W0903 23:44:11.703255  171911 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:44:11.703264  171911 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:44:11.703357  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:44:11.737323  171911 cri.go:89] found id: ""
	I0903 23:44:11.737367  171911 logs.go:282] 0 containers: []
	W0903 23:44:11.737380  171911 logs.go:284] No container was found matching "etcd"
	I0903 23:44:11.737402  171911 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:44:11.737479  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:44:11.771970  171911 cri.go:89] found id: ""
	I0903 23:44:11.772010  171911 logs.go:282] 0 containers: []
	W0903 23:44:11.772021  171911 logs.go:284] No container was found matching "coredns"
	I0903 23:44:11.772030  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:44:11.772104  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:44:11.806342  171911 cri.go:89] found id: ""
	I0903 23:44:11.806386  171911 logs.go:282] 0 containers: []
	W0903 23:44:11.806397  171911 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:44:11.806406  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:44:11.806483  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:44:11.843136  171911 cri.go:89] found id: ""
	I0903 23:44:11.843170  171911 logs.go:282] 0 containers: []
	W0903 23:44:11.843181  171911 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:44:11.843189  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:44:11.843259  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:44:11.877246  171911 cri.go:89] found id: ""
	I0903 23:44:11.877285  171911 logs.go:282] 0 containers: []
	W0903 23:44:11.877296  171911 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:44:11.877306  171911 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:44:11.877379  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:44:11.915257  171911 cri.go:89] found id: ""
	I0903 23:44:11.915295  171911 logs.go:282] 0 containers: []
	W0903 23:44:11.915308  171911 logs.go:284] No container was found matching "kindnet"
	I0903 23:44:11.915317  171911 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0903 23:44:11.915396  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0903 23:44:11.949271  171911 cri.go:89] found id: ""
	I0903 23:44:11.949300  171911 logs.go:282] 0 containers: []
	W0903 23:44:11.949310  171911 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0903 23:44:11.949323  171911 logs.go:123] Gathering logs for dmesg ...
	I0903 23:44:11.949342  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0903 23:44:11.962921  171911 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:44:11.962954  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:44:12.025549  171911 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0903 23:44:12.025580  171911 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:44:12.025596  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0903 23:44:12.099077  171911 logs.go:123] Gathering logs for container status ...
	I0903 23:44:12.099120  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0903 23:44:12.136408  171911 logs.go:123] Gathering logs for kubelet ...
	I0903 23:44:12.136446  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0903 23:44:14.686632  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:44:14.704032  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:44:14.704101  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:44:14.739046  171911 cri.go:89] found id: ""
	I0903 23:44:14.739076  171911 logs.go:282] 0 containers: []
	W0903 23:44:14.739084  171911 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:44:14.739091  171911 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:44:14.739156  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:44:14.775028  171911 cri.go:89] found id: ""
	I0903 23:44:14.775066  171911 logs.go:282] 0 containers: []
	W0903 23:44:14.775078  171911 logs.go:284] No container was found matching "etcd"
	I0903 23:44:14.775087  171911 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:44:14.775150  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:44:14.808896  171911 cri.go:89] found id: ""
	I0903 23:44:14.808928  171911 logs.go:282] 0 containers: []
	W0903 23:44:14.808939  171911 logs.go:284] No container was found matching "coredns"
	I0903 23:44:14.808947  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:44:14.809014  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:44:14.844967  171911 cri.go:89] found id: ""
	I0903 23:44:14.844998  171911 logs.go:282] 0 containers: []
	W0903 23:44:14.845010  171911 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:44:14.845018  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:44:14.845087  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:44:14.878706  171911 cri.go:89] found id: ""
	I0903 23:44:14.878734  171911 logs.go:282] 0 containers: []
	W0903 23:44:14.878742  171911 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:44:14.878750  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:44:14.878824  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:44:14.914368  171911 cri.go:89] found id: ""
	I0903 23:44:14.914407  171911 logs.go:282] 0 containers: []
	W0903 23:44:14.914420  171911 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:44:14.914429  171911 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:44:14.914523  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:44:14.949846  171911 cri.go:89] found id: ""
	I0903 23:44:14.949873  171911 logs.go:282] 0 containers: []
	W0903 23:44:14.949881  171911 logs.go:284] No container was found matching "kindnet"
	I0903 23:44:14.949888  171911 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0903 23:44:14.949956  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0903 23:44:14.985479  171911 cri.go:89] found id: ""
	I0903 23:44:14.985511  171911 logs.go:282] 0 containers: []
	W0903 23:44:14.985522  171911 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0903 23:44:14.985534  171911 logs.go:123] Gathering logs for kubelet ...
	I0903 23:44:14.985550  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0903 23:44:15.036097  171911 logs.go:123] Gathering logs for dmesg ...
	I0903 23:44:15.036141  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0903 23:44:15.050336  171911 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:44:15.050365  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:44:15.116416  171911 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0903 23:44:15.116439  171911 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:44:15.116457  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0903 23:44:15.193453  171911 logs.go:123] Gathering logs for container status ...
	I0903 23:44:15.193498  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0903 23:44:17.731284  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:44:17.748791  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:44:17.748854  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:44:17.784857  171911 cri.go:89] found id: ""
	I0903 23:44:17.784884  171911 logs.go:282] 0 containers: []
	W0903 23:44:17.784892  171911 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:44:17.784897  171911 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:44:17.784953  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:44:17.819838  171911 cri.go:89] found id: ""
	I0903 23:44:17.819867  171911 logs.go:282] 0 containers: []
	W0903 23:44:17.819875  171911 logs.go:284] No container was found matching "etcd"
	I0903 23:44:17.819881  171911 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:44:17.819932  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:44:17.853453  171911 cri.go:89] found id: ""
	I0903 23:44:17.853482  171911 logs.go:282] 0 containers: []
	W0903 23:44:17.853489  171911 logs.go:284] No container was found matching "coredns"
	I0903 23:44:17.853496  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:44:17.853553  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:44:17.887886  171911 cri.go:89] found id: ""
	I0903 23:44:17.887915  171911 logs.go:282] 0 containers: []
	W0903 23:44:17.887923  171911 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:44:17.887930  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:44:17.887985  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:44:17.923140  171911 cri.go:89] found id: ""
	I0903 23:44:17.923172  171911 logs.go:282] 0 containers: []
	W0903 23:44:17.923183  171911 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:44:17.923190  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:44:17.923258  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:44:17.957595  171911 cri.go:89] found id: ""
	I0903 23:44:17.957625  171911 logs.go:282] 0 containers: []
	W0903 23:44:17.957638  171911 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:44:17.957647  171911 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:44:17.957717  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:44:17.990247  171911 cri.go:89] found id: ""
	I0903 23:44:17.990276  171911 logs.go:282] 0 containers: []
	W0903 23:44:17.990284  171911 logs.go:284] No container was found matching "kindnet"
	I0903 23:44:17.990290  171911 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0903 23:44:17.990362  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0903 23:44:18.024643  171911 cri.go:89] found id: ""
	I0903 23:44:18.024673  171911 logs.go:282] 0 containers: []
	W0903 23:44:18.024685  171911 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0903 23:44:18.024697  171911 logs.go:123] Gathering logs for kubelet ...
	I0903 23:44:18.024713  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0903 23:44:18.076397  171911 logs.go:123] Gathering logs for dmesg ...
	I0903 23:44:18.076436  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0903 23:44:18.090204  171911 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:44:18.090233  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:44:18.163020  171911 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0903 23:44:18.163044  171911 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:44:18.163059  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0903 23:44:18.240276  171911 logs.go:123] Gathering logs for container status ...
	I0903 23:44:18.240314  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0903 23:44:20.781710  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:44:20.798871  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:44:20.798939  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:44:20.833834  171911 cri.go:89] found id: ""
	I0903 23:44:20.833867  171911 logs.go:282] 0 containers: []
	W0903 23:44:20.833875  171911 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:44:20.833881  171911 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:44:20.833936  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:44:20.868536  171911 cri.go:89] found id: ""
	I0903 23:44:20.868569  171911 logs.go:282] 0 containers: []
	W0903 23:44:20.868577  171911 logs.go:284] No container was found matching "etcd"
	I0903 23:44:20.868583  171911 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:44:20.868639  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:44:20.902513  171911 cri.go:89] found id: ""
	I0903 23:44:20.902546  171911 logs.go:282] 0 containers: []
	W0903 23:44:20.902557  171911 logs.go:284] No container was found matching "coredns"
	I0903 23:44:20.902570  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:44:20.902644  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:44:20.935967  171911 cri.go:89] found id: ""
	I0903 23:44:20.935994  171911 logs.go:282] 0 containers: []
	W0903 23:44:20.936001  171911 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:44:20.936007  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:44:20.936070  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:44:20.969967  171911 cri.go:89] found id: ""
	I0903 23:44:20.969995  171911 logs.go:282] 0 containers: []
	W0903 23:44:20.970003  171911 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:44:20.970009  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:44:20.970067  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:44:21.005097  171911 cri.go:89] found id: ""
	I0903 23:44:21.005130  171911 logs.go:282] 0 containers: []
	W0903 23:44:21.005149  171911 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:44:21.005158  171911 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:44:21.005231  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:44:21.040315  171911 cri.go:89] found id: ""
	I0903 23:44:21.040350  171911 logs.go:282] 0 containers: []
	W0903 23:44:21.040357  171911 logs.go:284] No container was found matching "kindnet"
	I0903 23:44:21.040364  171911 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0903 23:44:21.040431  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0903 23:44:21.075411  171911 cri.go:89] found id: ""
	I0903 23:44:21.075447  171911 logs.go:282] 0 containers: []
	W0903 23:44:21.075456  171911 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0903 23:44:21.075466  171911 logs.go:123] Gathering logs for kubelet ...
	I0903 23:44:21.075478  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0903 23:44:21.125281  171911 logs.go:123] Gathering logs for dmesg ...
	I0903 23:44:21.125322  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0903 23:44:21.139605  171911 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:44:21.139635  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:44:21.203960  171911 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0903 23:44:21.203986  171911 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:44:21.204004  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0903 23:44:21.278167  171911 logs.go:123] Gathering logs for container status ...
	I0903 23:44:21.278211  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
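
	The cycles that follow repeat roughly every three seconds: minikube re-runs pgrep for the apiserver process and, while that keeps failing, re-lists each expected control-plane container before gathering logs again. A rough shell re-creation of that wait loop, assuming a fixed three-second retry interval (the real loop lives in minikube's Go code and applies its own timeout and backoff):

	    # Hypothetical re-creation of the probe loop visible in this log.
	    while ! sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
	      # No apiserver process yet: check every expected component's containers.
	      for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	                  kube-controller-manager kindnet kubernetes-dashboard; do
	        sudo crictl ps -a --quiet --name="$name"
	      done
	      sleep 3
	    done
	    echo "kube-apiserver process found"
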
	I0903 23:44:23.820132  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:44:23.839119  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:44:23.839184  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:44:23.883827  171911 cri.go:89] found id: ""
	I0903 23:44:23.883864  171911 logs.go:282] 0 containers: []
	W0903 23:44:23.883876  171911 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:44:23.883884  171911 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:44:23.883943  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:44:23.929729  171911 cri.go:89] found id: ""
	I0903 23:44:23.929756  171911 logs.go:282] 0 containers: []
	W0903 23:44:23.929765  171911 logs.go:284] No container was found matching "etcd"
	I0903 23:44:23.929771  171911 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:44:23.929822  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:44:23.962676  171911 cri.go:89] found id: ""
	I0903 23:44:23.962708  171911 logs.go:282] 0 containers: []
	W0903 23:44:23.962716  171911 logs.go:284] No container was found matching "coredns"
	I0903 23:44:23.962722  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:44:23.962778  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:44:23.995464  171911 cri.go:89] found id: ""
	I0903 23:44:23.995505  171911 logs.go:282] 0 containers: []
	W0903 23:44:23.995516  171911 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:44:23.995522  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:44:23.995586  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:44:24.030690  171911 cri.go:89] found id: ""
	I0903 23:44:24.030718  171911 logs.go:282] 0 containers: []
	W0903 23:44:24.030726  171911 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:44:24.030733  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:44:24.030791  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:44:24.064311  171911 cri.go:89] found id: ""
	I0903 23:44:24.064338  171911 logs.go:282] 0 containers: []
	W0903 23:44:24.064346  171911 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:44:24.064352  171911 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:44:24.064408  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:44:24.098888  171911 cri.go:89] found id: ""
	I0903 23:44:24.098917  171911 logs.go:282] 0 containers: []
	W0903 23:44:24.098924  171911 logs.go:284] No container was found matching "kindnet"
	I0903 23:44:24.098930  171911 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0903 23:44:24.098990  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0903 23:44:24.135030  171911 cri.go:89] found id: ""
	I0903 23:44:24.135057  171911 logs.go:282] 0 containers: []
	W0903 23:44:24.135064  171911 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0903 23:44:24.135074  171911 logs.go:123] Gathering logs for kubelet ...
	I0903 23:44:24.135086  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0903 23:44:24.185228  171911 logs.go:123] Gathering logs for dmesg ...
	I0903 23:44:24.185266  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0903 23:44:24.198908  171911 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:44:24.198937  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:44:24.260291  171911 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0903 23:44:24.260337  171911 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:44:24.260355  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0903 23:44:24.337581  171911 logs.go:123] Gathering logs for container status ...
	I0903 23:44:24.337620  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0903 23:44:26.876959  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:44:26.893615  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:44:26.893679  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:44:26.926745  171911 cri.go:89] found id: ""
	I0903 23:44:26.926776  171911 logs.go:282] 0 containers: []
	W0903 23:44:26.926784  171911 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:44:26.926791  171911 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:44:26.926848  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:44:26.959697  171911 cri.go:89] found id: ""
	I0903 23:44:26.959727  171911 logs.go:282] 0 containers: []
	W0903 23:44:26.959735  171911 logs.go:284] No container was found matching "etcd"
	I0903 23:44:26.959742  171911 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:44:26.959795  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:44:26.991963  171911 cri.go:89] found id: ""
	I0903 23:44:26.991996  171911 logs.go:282] 0 containers: []
	W0903 23:44:26.992004  171911 logs.go:284] No container was found matching "coredns"
	I0903 23:44:26.992011  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:44:26.992064  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:44:27.025939  171911 cri.go:89] found id: ""
	I0903 23:44:27.025978  171911 logs.go:282] 0 containers: []
	W0903 23:44:27.025989  171911 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:44:27.025997  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:44:27.026065  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:44:27.058572  171911 cri.go:89] found id: ""
	I0903 23:44:27.058598  171911 logs.go:282] 0 containers: []
	W0903 23:44:27.058606  171911 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:44:27.058612  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:44:27.058666  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:44:27.092277  171911 cri.go:89] found id: ""
	I0903 23:44:27.092309  171911 logs.go:282] 0 containers: []
	W0903 23:44:27.092318  171911 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:44:27.092324  171911 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:44:27.092385  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:44:27.127742  171911 cri.go:89] found id: ""
	I0903 23:44:27.127777  171911 logs.go:282] 0 containers: []
	W0903 23:44:27.127789  171911 logs.go:284] No container was found matching "kindnet"
	I0903 23:44:27.127798  171911 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0903 23:44:27.127872  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0903 23:44:27.162425  171911 cri.go:89] found id: ""
	I0903 23:44:27.162463  171911 logs.go:282] 0 containers: []
	W0903 23:44:27.162474  171911 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0903 23:44:27.162487  171911 logs.go:123] Gathering logs for kubelet ...
	I0903 23:44:27.162503  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0903 23:44:27.213126  171911 logs.go:123] Gathering logs for dmesg ...
	I0903 23:44:27.213165  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0903 23:44:27.226983  171911 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:44:27.227013  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:44:27.293122  171911 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0903 23:44:27.293152  171911 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:44:27.293169  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0903 23:44:27.368497  171911 logs.go:123] Gathering logs for container status ...
	I0903 23:44:27.368538  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0903 23:44:29.907183  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:44:29.924079  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:44:29.924172  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:44:29.957813  171911 cri.go:89] found id: ""
	I0903 23:44:29.957843  171911 logs.go:282] 0 containers: []
	W0903 23:44:29.957851  171911 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:44:29.957857  171911 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:44:29.957919  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:44:29.992782  171911 cri.go:89] found id: ""
	I0903 23:44:29.992812  171911 logs.go:282] 0 containers: []
	W0903 23:44:29.992819  171911 logs.go:284] No container was found matching "etcd"
	I0903 23:44:29.992826  171911 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:44:29.992888  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:44:30.026629  171911 cri.go:89] found id: ""
	I0903 23:44:30.026664  171911 logs.go:282] 0 containers: []
	W0903 23:44:30.026674  171911 logs.go:284] No container was found matching "coredns"
	I0903 23:44:30.026682  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:44:30.026756  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:44:30.060035  171911 cri.go:89] found id: ""
	I0903 23:44:30.060074  171911 logs.go:282] 0 containers: []
	W0903 23:44:30.060083  171911 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:44:30.060092  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:44:30.060154  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:44:30.101281  171911 cri.go:89] found id: ""
	I0903 23:44:30.101319  171911 logs.go:282] 0 containers: []
	W0903 23:44:30.101330  171911 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:44:30.101338  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:44:30.101419  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:44:30.146884  171911 cri.go:89] found id: ""
	I0903 23:44:30.146911  171911 logs.go:282] 0 containers: []
	W0903 23:44:30.146918  171911 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:44:30.146925  171911 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:44:30.146989  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:44:30.180988  171911 cri.go:89] found id: ""
	I0903 23:44:30.181016  171911 logs.go:282] 0 containers: []
	W0903 23:44:30.181024  171911 logs.go:284] No container was found matching "kindnet"
	I0903 23:44:30.181030  171911 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0903 23:44:30.181103  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0903 23:44:30.214648  171911 cri.go:89] found id: ""
	I0903 23:44:30.214679  171911 logs.go:282] 0 containers: []
	W0903 23:44:30.214687  171911 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0903 23:44:30.214696  171911 logs.go:123] Gathering logs for kubelet ...
	I0903 23:44:30.214709  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0903 23:44:30.262757  171911 logs.go:123] Gathering logs for dmesg ...
	I0903 23:44:30.262799  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0903 23:44:30.283299  171911 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:44:30.283331  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:44:30.366919  171911 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0903 23:44:30.366945  171911 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:44:30.366959  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0903 23:44:30.442612  171911 logs.go:123] Gathering logs for container status ...
	I0903 23:44:30.442654  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
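
	The "container status" step above is worth reading closely: it resolves crictl's full path with which (falling back to the bare command name if which finds nothing), lists every container in any state, and only if that whole pipeline fails does it fall back to the Docker CLI:

	    # Fallback chain used for the container-status snapshot.
	    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
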
	I0903 23:44:32.981733  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:44:32.999850  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:44:32.999930  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:44:33.040618  171911 cri.go:89] found id: ""
	I0903 23:44:33.040653  171911 logs.go:282] 0 containers: []
	W0903 23:44:33.040664  171911 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:44:33.040671  171911 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:44:33.040738  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:44:33.081786  171911 cri.go:89] found id: ""
	I0903 23:44:33.081818  171911 logs.go:282] 0 containers: []
	W0903 23:44:33.081829  171911 logs.go:284] No container was found matching "etcd"
	I0903 23:44:33.081836  171911 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:44:33.081906  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:44:33.125847  171911 cri.go:89] found id: ""
	I0903 23:44:33.125878  171911 logs.go:282] 0 containers: []
	W0903 23:44:33.125888  171911 logs.go:284] No container was found matching "coredns"
	I0903 23:44:33.125896  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:44:33.125962  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:44:33.167437  171911 cri.go:89] found id: ""
	I0903 23:44:33.167465  171911 logs.go:282] 0 containers: []
	W0903 23:44:33.167473  171911 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:44:33.167481  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:44:33.167557  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:44:33.208145  171911 cri.go:89] found id: ""
	I0903 23:44:33.208177  171911 logs.go:282] 0 containers: []
	W0903 23:44:33.208185  171911 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:44:33.208192  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:44:33.208248  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:44:33.250045  171911 cri.go:89] found id: ""
	I0903 23:44:33.250074  171911 logs.go:282] 0 containers: []
	W0903 23:44:33.250081  171911 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:44:33.250087  171911 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:44:33.250139  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:44:33.289576  171911 cri.go:89] found id: ""
	I0903 23:44:33.289607  171911 logs.go:282] 0 containers: []
	W0903 23:44:33.289615  171911 logs.go:284] No container was found matching "kindnet"
	I0903 23:44:33.289621  171911 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0903 23:44:33.289676  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0903 23:44:33.325452  171911 cri.go:89] found id: ""
	I0903 23:44:33.325485  171911 logs.go:282] 0 containers: []
	W0903 23:44:33.325493  171911 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0903 23:44:33.325503  171911 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:44:33.325515  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0903 23:44:33.403967  171911 logs.go:123] Gathering logs for container status ...
	I0903 23:44:33.404018  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0903 23:44:33.441581  171911 logs.go:123] Gathering logs for kubelet ...
	I0903 23:44:33.441619  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0903 23:44:33.488744  171911 logs.go:123] Gathering logs for dmesg ...
	I0903 23:44:33.488794  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0903 23:44:33.502603  171911 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:44:33.502648  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:44:33.567447  171911 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
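
	Every cycle also retries the same pinned-kubectl invocation against the in-VM kubeconfig; it is runnable verbatim on the node and, while the apiserver is down, exits with status 1 and the "connection refused" message quoted above:

	    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes \
	      --kubeconfig=/var/lib/minikube/kubeconfig
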
	I0903 23:44:36.069781  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:44:36.093945  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:44:36.094023  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:44:36.138900  171911 cri.go:89] found id: ""
	I0903 23:44:36.138929  171911 logs.go:282] 0 containers: []
	W0903 23:44:36.138940  171911 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:44:36.138950  171911 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:44:36.139016  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:44:36.174814  171911 cri.go:89] found id: ""
	I0903 23:44:36.174841  171911 logs.go:282] 0 containers: []
	W0903 23:44:36.174849  171911 logs.go:284] No container was found matching "etcd"
	I0903 23:44:36.174855  171911 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:44:36.174918  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:44:36.211574  171911 cri.go:89] found id: ""
	I0903 23:44:36.211604  171911 logs.go:282] 0 containers: []
	W0903 23:44:36.211611  171911 logs.go:284] No container was found matching "coredns"
	I0903 23:44:36.211618  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:44:36.211670  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:44:36.245780  171911 cri.go:89] found id: ""
	I0903 23:44:36.245812  171911 logs.go:282] 0 containers: []
	W0903 23:44:36.245823  171911 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:44:36.245830  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:44:36.245886  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:44:36.280576  171911 cri.go:89] found id: ""
	I0903 23:44:36.280606  171911 logs.go:282] 0 containers: []
	W0903 23:44:36.280614  171911 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:44:36.280620  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:44:36.280674  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:44:36.315469  171911 cri.go:89] found id: ""
	I0903 23:44:36.315504  171911 logs.go:282] 0 containers: []
	W0903 23:44:36.315515  171911 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:44:36.315524  171911 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:44:36.315582  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:44:36.349983  171911 cri.go:89] found id: ""
	I0903 23:44:36.350018  171911 logs.go:282] 0 containers: []
	W0903 23:44:36.350027  171911 logs.go:284] No container was found matching "kindnet"
	I0903 23:44:36.350033  171911 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0903 23:44:36.350083  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0903 23:44:36.384827  171911 cri.go:89] found id: ""
	I0903 23:44:36.384857  171911 logs.go:282] 0 containers: []
	W0903 23:44:36.384866  171911 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0903 23:44:36.384877  171911 logs.go:123] Gathering logs for dmesg ...
	I0903 23:44:36.384896  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0903 23:44:36.398999  171911 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:44:36.399029  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:44:36.467458  171911 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0903 23:44:36.467492  171911 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:44:36.467507  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0903 23:44:36.546881  171911 logs.go:123] Gathering logs for container status ...
	I0903 23:44:36.546922  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0903 23:44:36.584400  171911 logs.go:123] Gathering logs for kubelet ...
	I0903 23:44:36.584437  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0903 23:44:39.135283  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:44:39.152700  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:44:39.152762  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:44:39.187286  171911 cri.go:89] found id: ""
	I0903 23:44:39.187333  171911 logs.go:282] 0 containers: []
	W0903 23:44:39.187344  171911 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:44:39.187351  171911 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:44:39.187418  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:44:39.222904  171911 cri.go:89] found id: ""
	I0903 23:44:39.222932  171911 logs.go:282] 0 containers: []
	W0903 23:44:39.222940  171911 logs.go:284] No container was found matching "etcd"
	I0903 23:44:39.222946  171911 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:44:39.223001  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:44:39.256820  171911 cri.go:89] found id: ""
	I0903 23:44:39.256849  171911 logs.go:282] 0 containers: []
	W0903 23:44:39.256860  171911 logs.go:284] No container was found matching "coredns"
	I0903 23:44:39.256867  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:44:39.256936  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:44:39.290701  171911 cri.go:89] found id: ""
	I0903 23:44:39.290732  171911 logs.go:282] 0 containers: []
	W0903 23:44:39.290742  171911 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:44:39.290748  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:44:39.290814  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:44:39.325458  171911 cri.go:89] found id: ""
	I0903 23:44:39.325494  171911 logs.go:282] 0 containers: []
	W0903 23:44:39.325505  171911 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:44:39.325513  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:44:39.325577  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:44:39.358959  171911 cri.go:89] found id: ""
	I0903 23:44:39.358988  171911 logs.go:282] 0 containers: []
	W0903 23:44:39.358996  171911 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:44:39.359002  171911 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:44:39.359070  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:44:39.394031  171911 cri.go:89] found id: ""
	I0903 23:44:39.394058  171911 logs.go:282] 0 containers: []
	W0903 23:44:39.394066  171911 logs.go:284] No container was found matching "kindnet"
	I0903 23:44:39.394072  171911 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0903 23:44:39.394135  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0903 23:44:39.428921  171911 cri.go:89] found id: ""
	I0903 23:44:39.428950  171911 logs.go:282] 0 containers: []
	W0903 23:44:39.428961  171911 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0903 23:44:39.428973  171911 logs.go:123] Gathering logs for kubelet ...
	I0903 23:44:39.428992  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0903 23:44:39.478303  171911 logs.go:123] Gathering logs for dmesg ...
	I0903 23:44:39.478346  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0903 23:44:39.492136  171911 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:44:39.492165  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:44:39.556474  171911 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0903 23:44:39.556499  171911 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:44:39.556512  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0903 23:44:39.630384  171911 logs.go:123] Gathering logs for container status ...
	I0903 23:44:39.630421  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0903 23:44:42.169783  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:44:42.186331  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:44:42.186392  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:44:42.220630  171911 cri.go:89] found id: ""
	I0903 23:44:42.220658  171911 logs.go:282] 0 containers: []
	W0903 23:44:42.220669  171911 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:44:42.220678  171911 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:44:42.220751  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:44:42.256274  171911 cri.go:89] found id: ""
	I0903 23:44:42.256310  171911 logs.go:282] 0 containers: []
	W0903 23:44:42.256321  171911 logs.go:284] No container was found matching "etcd"
	I0903 23:44:42.256329  171911 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:44:42.256387  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:44:42.289958  171911 cri.go:89] found id: ""
	I0903 23:44:42.289988  171911 logs.go:282] 0 containers: []
	W0903 23:44:42.289998  171911 logs.go:284] No container was found matching "coredns"
	I0903 23:44:42.290006  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:44:42.290065  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:44:42.322425  171911 cri.go:89] found id: ""
	I0903 23:44:42.322453  171911 logs.go:282] 0 containers: []
	W0903 23:44:42.322464  171911 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:44:42.322473  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:44:42.322537  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:44:42.357459  171911 cri.go:89] found id: ""
	I0903 23:44:42.357494  171911 logs.go:282] 0 containers: []
	W0903 23:44:42.357503  171911 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:44:42.357509  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:44:42.357588  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:44:42.390807  171911 cri.go:89] found id: ""
	I0903 23:44:42.390837  171911 logs.go:282] 0 containers: []
	W0903 23:44:42.390845  171911 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:44:42.390851  171911 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:44:42.390924  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:44:42.424548  171911 cri.go:89] found id: ""
	I0903 23:44:42.424579  171911 logs.go:282] 0 containers: []
	W0903 23:44:42.424590  171911 logs.go:284] No container was found matching "kindnet"
	I0903 23:44:42.424598  171911 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0903 23:44:42.424667  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0903 23:44:42.459215  171911 cri.go:89] found id: ""
	I0903 23:44:42.459250  171911 logs.go:282] 0 containers: []
	W0903 23:44:42.459261  171911 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0903 23:44:42.459274  171911 logs.go:123] Gathering logs for kubelet ...
	I0903 23:44:42.459290  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0903 23:44:42.505525  171911 logs.go:123] Gathering logs for dmesg ...
	I0903 23:44:42.505560  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0903 23:44:42.519712  171911 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:44:42.519744  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:44:42.583576  171911 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0903 23:44:42.583603  171911 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:44:42.583618  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0903 23:44:42.660899  171911 logs.go:123] Gathering logs for container status ...
	I0903 23:44:42.660936  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0903 23:44:45.200707  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:44:45.217299  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:44:45.217372  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:44:45.252045  171911 cri.go:89] found id: ""
	I0903 23:44:45.252073  171911 logs.go:282] 0 containers: []
	W0903 23:44:45.252081  171911 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:44:45.252087  171911 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:44:45.252155  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:44:45.287247  171911 cri.go:89] found id: ""
	I0903 23:44:45.287281  171911 logs.go:282] 0 containers: []
	W0903 23:44:45.287289  171911 logs.go:284] No container was found matching "etcd"
	I0903 23:44:45.287296  171911 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:44:45.287353  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:44:45.320423  171911 cri.go:89] found id: ""
	I0903 23:44:45.320450  171911 logs.go:282] 0 containers: []
	W0903 23:44:45.320457  171911 logs.go:284] No container was found matching "coredns"
	I0903 23:44:45.320463  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:44:45.320517  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:44:45.353147  171911 cri.go:89] found id: ""
	I0903 23:44:45.353179  171911 logs.go:282] 0 containers: []
	W0903 23:44:45.353187  171911 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:44:45.353193  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:44:45.353261  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:44:45.387052  171911 cri.go:89] found id: ""
	I0903 23:44:45.387080  171911 logs.go:282] 0 containers: []
	W0903 23:44:45.387089  171911 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:44:45.387096  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:44:45.387151  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:44:45.422621  171911 cri.go:89] found id: ""
	I0903 23:44:45.422651  171911 logs.go:282] 0 containers: []
	W0903 23:44:45.422659  171911 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:44:45.422666  171911 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:44:45.422734  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:44:45.457224  171911 cri.go:89] found id: ""
	I0903 23:44:45.457258  171911 logs.go:282] 0 containers: []
	W0903 23:44:45.457266  171911 logs.go:284] No container was found matching "kindnet"
	I0903 23:44:45.457274  171911 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0903 23:44:45.457339  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0903 23:44:45.490659  171911 cri.go:89] found id: ""
	I0903 23:44:45.490685  171911 logs.go:282] 0 containers: []
	W0903 23:44:45.490693  171911 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0903 23:44:45.490706  171911 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:44:45.490729  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:44:45.556871  171911 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0903 23:44:45.556894  171911 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:44:45.556909  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0903 23:44:45.628062  171911 logs.go:123] Gathering logs for container status ...
	I0903 23:44:45.628101  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0903 23:44:45.666937  171911 logs.go:123] Gathering logs for kubelet ...
	I0903 23:44:45.666977  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0903 23:44:45.713545  171911 logs.go:123] Gathering logs for dmesg ...
	I0903 23:44:45.713580  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0903 23:44:48.227552  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:44:48.245044  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:44:48.245118  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:44:48.279490  171911 cri.go:89] found id: ""
	I0903 23:44:48.279519  171911 logs.go:282] 0 containers: []
	W0903 23:44:48.279529  171911 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:44:48.279537  171911 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:44:48.279621  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:44:48.313971  171911 cri.go:89] found id: ""
	I0903 23:44:48.313998  171911 logs.go:282] 0 containers: []
	W0903 23:44:48.314006  171911 logs.go:284] No container was found matching "etcd"
	I0903 23:44:48.314012  171911 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:44:48.314076  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:44:48.349729  171911 cri.go:89] found id: ""
	I0903 23:44:48.349765  171911 logs.go:282] 0 containers: []
	W0903 23:44:48.349773  171911 logs.go:284] No container was found matching "coredns"
	I0903 23:44:48.349779  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:44:48.349843  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:44:48.384104  171911 cri.go:89] found id: ""
	I0903 23:44:48.384132  171911 logs.go:282] 0 containers: []
	W0903 23:44:48.384140  171911 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:44:48.384147  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:44:48.384210  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:44:48.418534  171911 cri.go:89] found id: ""
	I0903 23:44:48.418569  171911 logs.go:282] 0 containers: []
	W0903 23:44:48.418581  171911 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:44:48.418589  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:44:48.418656  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:44:48.452604  171911 cri.go:89] found id: ""
	I0903 23:44:48.452632  171911 logs.go:282] 0 containers: []
	W0903 23:44:48.452640  171911 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:44:48.452647  171911 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:44:48.452711  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:44:48.485587  171911 cri.go:89] found id: ""
	I0903 23:44:48.485618  171911 logs.go:282] 0 containers: []
	W0903 23:44:48.485629  171911 logs.go:284] No container was found matching "kindnet"
	I0903 23:44:48.485636  171911 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0903 23:44:48.485701  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0903 23:44:48.518840  171911 cri.go:89] found id: ""
	I0903 23:44:48.518865  171911 logs.go:282] 0 containers: []
	W0903 23:44:48.518876  171911 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0903 23:44:48.518890  171911 logs.go:123] Gathering logs for kubelet ...
	I0903 23:44:48.518906  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0903 23:44:48.566332  171911 logs.go:123] Gathering logs for dmesg ...
	I0903 23:44:48.566368  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0903 23:44:48.580074  171911 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:44:48.580103  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:44:48.646139  171911 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0903 23:44:48.646163  171911 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:44:48.646177  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0903 23:44:48.721508  171911 logs.go:123] Gathering logs for container status ...
	I0903 23:44:48.721551  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0903 23:44:51.261729  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:44:51.277615  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:44:51.277688  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:44:51.311728  171911 cri.go:89] found id: ""
	I0903 23:44:51.311758  171911 logs.go:282] 0 containers: []
	W0903 23:44:51.311767  171911 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:44:51.311773  171911 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:44:51.311841  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:44:51.346364  171911 cri.go:89] found id: ""
	I0903 23:44:51.346394  171911 logs.go:282] 0 containers: []
	W0903 23:44:51.346402  171911 logs.go:284] No container was found matching "etcd"
	I0903 23:44:51.346408  171911 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:44:51.346467  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:44:51.380196  171911 cri.go:89] found id: ""
	I0903 23:44:51.380233  171911 logs.go:282] 0 containers: []
	W0903 23:44:51.380249  171911 logs.go:284] No container was found matching "coredns"
	I0903 23:44:51.380259  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:44:51.380331  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:44:51.414829  171911 cri.go:89] found id: ""
	I0903 23:44:51.414861  171911 logs.go:282] 0 containers: []
	W0903 23:44:51.414869  171911 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:44:51.414875  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:44:51.414943  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:44:51.448741  171911 cri.go:89] found id: ""
	I0903 23:44:51.448779  171911 logs.go:282] 0 containers: []
	W0903 23:44:51.448792  171911 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:44:51.448801  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:44:51.448865  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:44:51.484499  171911 cri.go:89] found id: ""
	I0903 23:44:51.484537  171911 logs.go:282] 0 containers: []
	W0903 23:44:51.484545  171911 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:44:51.484552  171911 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:44:51.484605  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:44:51.518538  171911 cri.go:89] found id: ""
	I0903 23:44:51.518568  171911 logs.go:282] 0 containers: []
	W0903 23:44:51.518580  171911 logs.go:284] No container was found matching "kindnet"
	I0903 23:44:51.518589  171911 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0903 23:44:51.518649  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0903 23:44:51.560124  171911 cri.go:89] found id: ""
	I0903 23:44:51.560158  171911 logs.go:282] 0 containers: []
	W0903 23:44:51.560168  171911 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0903 23:44:51.560193  171911 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:44:51.560207  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0903 23:44:51.636716  171911 logs.go:123] Gathering logs for container status ...
	I0903 23:44:51.636760  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0903 23:44:51.674322  171911 logs.go:123] Gathering logs for kubelet ...
	I0903 23:44:51.674355  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0903 23:44:51.723819  171911 logs.go:123] Gathering logs for dmesg ...
	I0903 23:44:51.723856  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0903 23:44:51.737446  171911 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:44:51.737478  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:44:51.800575  171911 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0903 23:44:54.300746  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:44:54.317060  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:44:54.317135  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:44:54.350356  171911 cri.go:89] found id: ""
	I0903 23:44:54.350382  171911 logs.go:282] 0 containers: []
	W0903 23:44:54.350389  171911 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:44:54.350396  171911 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:44:54.350458  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:44:54.386548  171911 cri.go:89] found id: ""
	I0903 23:44:54.386577  171911 logs.go:282] 0 containers: []
	W0903 23:44:54.386586  171911 logs.go:284] No container was found matching "etcd"
	I0903 23:44:54.386593  171911 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:44:54.386647  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:44:54.423360  171911 cri.go:89] found id: ""
	I0903 23:44:54.423388  171911 logs.go:282] 0 containers: []
	W0903 23:44:54.423395  171911 logs.go:284] No container was found matching "coredns"
	I0903 23:44:54.423407  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:44:54.423458  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:44:54.458673  171911 cri.go:89] found id: ""
	I0903 23:44:54.458701  171911 logs.go:282] 0 containers: []
	W0903 23:44:54.458709  171911 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:44:54.458716  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:44:54.458781  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:44:54.491692  171911 cri.go:89] found id: ""
	I0903 23:44:54.491726  171911 logs.go:282] 0 containers: []
	W0903 23:44:54.491738  171911 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:44:54.491746  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:44:54.491809  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:44:54.524500  171911 cri.go:89] found id: ""
	I0903 23:44:54.524530  171911 logs.go:282] 0 containers: []
	W0903 23:44:54.524543  171911 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:44:54.524550  171911 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:44:54.524614  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:44:54.558644  171911 cri.go:89] found id: ""
	I0903 23:44:54.558676  171911 logs.go:282] 0 containers: []
	W0903 23:44:54.558688  171911 logs.go:284] No container was found matching "kindnet"
	I0903 23:44:54.558696  171911 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0903 23:44:54.558773  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0903 23:44:54.592814  171911 cri.go:89] found id: ""
	I0903 23:44:54.592841  171911 logs.go:282] 0 containers: []
	W0903 23:44:54.592851  171911 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0903 23:44:54.592863  171911 logs.go:123] Gathering logs for kubelet ...
	I0903 23:44:54.592879  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0903 23:44:54.642538  171911 logs.go:123] Gathering logs for dmesg ...
	I0903 23:44:54.642572  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0903 23:44:54.656435  171911 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:44:54.656468  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:44:54.721260  171911 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0903 23:44:54.721286  171911 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:44:54.721304  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0903 23:44:54.798283  171911 logs.go:123] Gathering logs for container status ...
	I0903 23:44:54.798323  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0903 23:44:57.337294  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:44:57.353760  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:44:57.353842  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:44:57.387108  171911 cri.go:89] found id: ""
	I0903 23:44:57.387136  171911 logs.go:282] 0 containers: []
	W0903 23:44:57.387146  171911 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:44:57.387153  171911 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:44:57.387219  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:44:57.421245  171911 cri.go:89] found id: ""
	I0903 23:44:57.421273  171911 logs.go:282] 0 containers: []
	W0903 23:44:57.421283  171911 logs.go:284] No container was found matching "etcd"
	I0903 23:44:57.421291  171911 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:44:57.421367  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:44:57.455403  171911 cri.go:89] found id: ""
	I0903 23:44:57.455431  171911 logs.go:282] 0 containers: []
	W0903 23:44:57.455441  171911 logs.go:284] No container was found matching "coredns"
	I0903 23:44:57.455450  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:44:57.455510  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:44:57.487825  171911 cri.go:89] found id: ""
	I0903 23:44:57.487860  171911 logs.go:282] 0 containers: []
	W0903 23:44:57.487871  171911 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:44:57.487880  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:44:57.487935  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:44:57.522048  171911 cri.go:89] found id: ""
	I0903 23:44:57.522073  171911 logs.go:282] 0 containers: []
	W0903 23:44:57.522081  171911 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:44:57.522087  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:44:57.522140  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:44:57.555520  171911 cri.go:89] found id: ""
	I0903 23:44:57.555545  171911 logs.go:282] 0 containers: []
	W0903 23:44:57.555553  171911 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:44:57.555560  171911 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:44:57.555622  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:44:57.588895  171911 cri.go:89] found id: ""
	I0903 23:44:57.588924  171911 logs.go:282] 0 containers: []
	W0903 23:44:57.588933  171911 logs.go:284] No container was found matching "kindnet"
	I0903 23:44:57.588941  171911 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0903 23:44:57.589002  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0903 23:44:57.623152  171911 cri.go:89] found id: ""
	I0903 23:44:57.623190  171911 logs.go:282] 0 containers: []
	W0903 23:44:57.623198  171911 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0903 23:44:57.623207  171911 logs.go:123] Gathering logs for kubelet ...
	I0903 23:44:57.623217  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0903 23:44:57.672898  171911 logs.go:123] Gathering logs for dmesg ...
	I0903 23:44:57.672938  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0903 23:44:57.686578  171911 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:44:57.686611  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:44:57.750436  171911 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0903 23:44:57.750467  171911 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:44:57.750485  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0903 23:44:57.830779  171911 logs.go:123] Gathering logs for container status ...
	I0903 23:44:57.830829  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
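
Each "Gathering logs for ..." pass fans out over a fixed set of log sources, one shell command per source, exactly as the Run: lines show: kubelet and CRI-O via journalctl (last 400 lines each), the kernel ring buffer via dmesg restricted to warning-and-worse levels, kubectl describe nodes against the node-local kubeconfig, and a container status listing with a crictl-then-docker fallback. A sketch of that fan-out, with the structure and names assumed for illustration (the real collector runs these over SSH and appends the output to the report):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // sources mirrors the per-pass command set visible in the log. The
    // dmesg flags mean: human-readable (-H), no pager (-P), no color
    // (-L=never), warn level and worse only, capped at 400 lines.
    var sources = []struct{ name, cmd string }{
    	{"kubelet", "sudo journalctl -u kubelet -n 400"},
    	{"dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
    	{"describe nodes", "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"},
    	{"CRI-O", "sudo journalctl -u crio -n 400"},
    	{"container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"},
    }

    func main() {
    	for _, s := range sources {
    		fmt.Println("Gathering logs for", s.name, "...")
    		out, err := exec.Command("/bin/bash", "-c", s.cmd).CombinedOutput()
    		if err != nil {
    			// Matches the W-level entries above: a failed source
    			// is logged and skipped, not fatal to the whole pass.
    			fmt.Printf("failed %s: %v\n", s.name, err)
    			continue
    		}
    		_ = out // the real collector appends this to the report
    	}
    }

The container-status entry shows the fallback design: resolve crictl via `which` (echoing the bare name so the command still parses if `which` prints nothing), and if that whole invocation fails, list containers with docker instead.
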
	I0903 23:45:00.371014  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:45:00.387297  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:45:00.387414  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:45:00.420632  171911 cri.go:89] found id: ""
	I0903 23:45:00.420662  171911 logs.go:282] 0 containers: []
	W0903 23:45:00.420670  171911 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:45:00.420676  171911 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:45:00.420729  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:45:00.453824  171911 cri.go:89] found id: ""
	I0903 23:45:00.453852  171911 logs.go:282] 0 containers: []
	W0903 23:45:00.453860  171911 logs.go:284] No container was found matching "etcd"
	I0903 23:45:00.453866  171911 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:45:00.453917  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:45:00.488618  171911 cri.go:89] found id: ""
	I0903 23:45:00.488650  171911 logs.go:282] 0 containers: []
	W0903 23:45:00.488661  171911 logs.go:284] No container was found matching "coredns"
	I0903 23:45:00.488669  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:45:00.488738  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:45:00.522545  171911 cri.go:89] found id: ""
	I0903 23:45:00.522579  171911 logs.go:282] 0 containers: []
	W0903 23:45:00.522587  171911 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:45:00.522595  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:45:00.522655  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:45:00.555419  171911 cri.go:89] found id: ""
	I0903 23:45:00.555445  171911 logs.go:282] 0 containers: []
	W0903 23:45:00.555453  171911 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:45:00.555459  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:45:00.555515  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:45:00.588742  171911 cri.go:89] found id: ""
	I0903 23:45:00.588777  171911 logs.go:282] 0 containers: []
	W0903 23:45:00.588790  171911 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:45:00.588799  171911 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:45:00.588876  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:45:00.621164  171911 cri.go:89] found id: ""
	I0903 23:45:00.621194  171911 logs.go:282] 0 containers: []
	W0903 23:45:00.621205  171911 logs.go:284] No container was found matching "kindnet"
	I0903 23:45:00.621212  171911 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0903 23:45:00.621287  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0903 23:45:00.652140  171911 cri.go:89] found id: ""
	I0903 23:45:00.652167  171911 logs.go:282] 0 containers: []
	W0903 23:45:00.652178  171911 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0903 23:45:00.652191  171911 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:45:00.652206  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0903 23:45:00.733518  171911 logs.go:123] Gathering logs for container status ...
	I0903 23:45:00.733560  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0903 23:45:00.770455  171911 logs.go:123] Gathering logs for kubelet ...
	I0903 23:45:00.770489  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0903 23:45:00.819129  171911 logs.go:123] Gathering logs for dmesg ...
	I0903 23:45:00.819161  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0903 23:45:00.832460  171911 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:45:00.832492  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:45:00.895930  171911 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0903 23:45:03.397643  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:45:03.414370  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:45:03.414441  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:45:03.448753  171911 cri.go:89] found id: ""
	I0903 23:45:03.448787  171911 logs.go:282] 0 containers: []
	W0903 23:45:03.448795  171911 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:45:03.448802  171911 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:45:03.448860  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:45:03.484668  171911 cri.go:89] found id: ""
	I0903 23:45:03.484696  171911 logs.go:282] 0 containers: []
	W0903 23:45:03.484703  171911 logs.go:284] No container was found matching "etcd"
	I0903 23:45:03.484709  171911 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:45:03.484763  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:45:03.517157  171911 cri.go:89] found id: ""
	I0903 23:45:03.517184  171911 logs.go:282] 0 containers: []
	W0903 23:45:03.517191  171911 logs.go:284] No container was found matching "coredns"
	I0903 23:45:03.517197  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:45:03.517250  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:45:03.552220  171911 cri.go:89] found id: ""
	I0903 23:45:03.552246  171911 logs.go:282] 0 containers: []
	W0903 23:45:03.552255  171911 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:45:03.552262  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:45:03.552328  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:45:03.585731  171911 cri.go:89] found id: ""
	I0903 23:45:03.585764  171911 logs.go:282] 0 containers: []
	W0903 23:45:03.585774  171911 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:45:03.585783  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:45:03.585854  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:45:03.619396  171911 cri.go:89] found id: ""
	I0903 23:45:03.619425  171911 logs.go:282] 0 containers: []
	W0903 23:45:03.619433  171911 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:45:03.619439  171911 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:45:03.619503  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:45:03.653461  171911 cri.go:89] found id: ""
	I0903 23:45:03.653489  171911 logs.go:282] 0 containers: []
	W0903 23:45:03.653500  171911 logs.go:284] No container was found matching "kindnet"
	I0903 23:45:03.653509  171911 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0903 23:45:03.653562  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0903 23:45:03.690075  171911 cri.go:89] found id: ""
	I0903 23:45:03.690102  171911 logs.go:282] 0 containers: []
	W0903 23:45:03.690112  171911 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0903 23:45:03.690123  171911 logs.go:123] Gathering logs for kubelet ...
	I0903 23:45:03.690139  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0903 23:45:03.742271  171911 logs.go:123] Gathering logs for dmesg ...
	I0903 23:45:03.742305  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0903 23:45:03.755513  171911 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:45:03.755548  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:45:03.817702  171911 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0903 23:45:03.817734  171911 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:45:03.817758  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0903 23:45:03.894336  171911 logs.go:123] Gathering logs for container status ...
	I0903 23:45:03.894377  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
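
Every `describe nodes` attempt in this section fails the same way: kubectl, pointed at the node-local kubeconfig, cannot reach the API server because nothing is listening on localhost:8443, which is consistent with the crictl lookups finding no kube-apiserver container at all. A plain TCP dial reproduces that condition; the helper below is a hypothetical illustration, not part of minikube:

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    // apiserverUp reports whether anything accepts TCP connections on the
    // given address. A "connection refused" dial error is exactly the
    // condition behind kubectl's "The connection to the server
    // localhost:8443 was refused" message in the log.
    func apiserverUp(addr string) bool {
    	conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
    	if err != nil {
    		return false
    	}
    	conn.Close()
    	return true
    }

    func main() {
    	fmt.Println("apiserver reachable:", apiserverUp("localhost:8443"))
    }
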
	I0903 23:45:06.433897  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:45:06.450322  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:45:06.450386  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:45:06.482782  171911 cri.go:89] found id: ""
	I0903 23:45:06.482810  171911 logs.go:282] 0 containers: []
	W0903 23:45:06.482818  171911 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:45:06.482824  171911 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:45:06.482878  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:45:06.516065  171911 cri.go:89] found id: ""
	I0903 23:45:06.516098  171911 logs.go:282] 0 containers: []
	W0903 23:45:06.516106  171911 logs.go:284] No container was found matching "etcd"
	I0903 23:45:06.516112  171911 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:45:06.516164  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:45:06.548668  171911 cri.go:89] found id: ""
	I0903 23:45:06.548695  171911 logs.go:282] 0 containers: []
	W0903 23:45:06.548703  171911 logs.go:284] No container was found matching "coredns"
	I0903 23:45:06.548710  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:45:06.548765  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:45:06.580287  171911 cri.go:89] found id: ""
	I0903 23:45:06.580316  171911 logs.go:282] 0 containers: []
	W0903 23:45:06.580324  171911 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:45:06.580331  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:45:06.580385  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:45:06.613698  171911 cri.go:89] found id: ""
	I0903 23:45:06.613728  171911 logs.go:282] 0 containers: []
	W0903 23:45:06.613736  171911 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:45:06.613742  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:45:06.613798  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:45:06.648492  171911 cri.go:89] found id: ""
	I0903 23:45:06.648520  171911 logs.go:282] 0 containers: []
	W0903 23:45:06.648531  171911 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:45:06.648539  171911 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:45:06.648591  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:45:06.682079  171911 cri.go:89] found id: ""
	I0903 23:45:06.682105  171911 logs.go:282] 0 containers: []
	W0903 23:45:06.682114  171911 logs.go:284] No container was found matching "kindnet"
	I0903 23:45:06.682123  171911 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0903 23:45:06.682182  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0903 23:45:06.717523  171911 cri.go:89] found id: ""
	I0903 23:45:06.717551  171911 logs.go:282] 0 containers: []
	W0903 23:45:06.717559  171911 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0903 23:45:06.717568  171911 logs.go:123] Gathering logs for kubelet ...
	I0903 23:45:06.717580  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0903 23:45:06.766524  171911 logs.go:123] Gathering logs for dmesg ...
	I0903 23:45:06.766557  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0903 23:45:06.779931  171911 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:45:06.779960  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:45:06.843183  171911 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0903 23:45:06.843204  171911 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:45:06.843217  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0903 23:45:06.919233  171911 logs.go:123] Gathering logs for container status ...
	I0903 23:45:06.919270  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0903 23:45:09.456643  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:45:09.475777  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:45:09.475855  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:45:09.516030  171911 cri.go:89] found id: ""
	I0903 23:45:09.516066  171911 logs.go:282] 0 containers: []
	W0903 23:45:09.516078  171911 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:45:09.516086  171911 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:45:09.516155  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:45:09.556025  171911 cri.go:89] found id: ""
	I0903 23:45:09.556058  171911 logs.go:282] 0 containers: []
	W0903 23:45:09.556071  171911 logs.go:284] No container was found matching "etcd"
	I0903 23:45:09.556080  171911 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:45:09.556145  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:45:09.596343  171911 cri.go:89] found id: ""
	I0903 23:45:09.596375  171911 logs.go:282] 0 containers: []
	W0903 23:45:09.596384  171911 logs.go:284] No container was found matching "coredns"
	I0903 23:45:09.596393  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:45:09.596456  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:45:09.634286  171911 cri.go:89] found id: ""
	I0903 23:45:09.634323  171911 logs.go:282] 0 containers: []
	W0903 23:45:09.634330  171911 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:45:09.634336  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:45:09.634387  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:45:09.667579  171911 cri.go:89] found id: ""
	I0903 23:45:09.667617  171911 logs.go:282] 0 containers: []
	W0903 23:45:09.667629  171911 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:45:09.667637  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:45:09.667709  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:45:09.702631  171911 cri.go:89] found id: ""
	I0903 23:45:09.702661  171911 logs.go:282] 0 containers: []
	W0903 23:45:09.702670  171911 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:45:09.702677  171911 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:45:09.702744  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:45:09.736481  171911 cri.go:89] found id: ""
	I0903 23:45:09.736513  171911 logs.go:282] 0 containers: []
	W0903 23:45:09.736522  171911 logs.go:284] No container was found matching "kindnet"
	I0903 23:45:09.736528  171911 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0903 23:45:09.736594  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0903 23:45:09.768392  171911 cri.go:89] found id: ""
	I0903 23:45:09.768420  171911 logs.go:282] 0 containers: []
	W0903 23:45:09.768428  171911 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0903 23:45:09.768438  171911 logs.go:123] Gathering logs for container status ...
	I0903 23:45:09.768454  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0903 23:45:09.804233  171911 logs.go:123] Gathering logs for kubelet ...
	I0903 23:45:09.804262  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0903 23:45:09.854916  171911 logs.go:123] Gathering logs for dmesg ...
	I0903 23:45:09.854951  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0903 23:45:09.868290  171911 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:45:09.868326  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:45:09.937659  171911 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0903 23:45:09.937686  171911 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:45:09.937702  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0903 23:45:12.515352  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:45:12.532069  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:45:12.532138  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:45:12.566307  171911 cri.go:89] found id: ""
	I0903 23:45:12.566347  171911 logs.go:282] 0 containers: []
	W0903 23:45:12.566356  171911 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:45:12.566361  171911 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:45:12.566413  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:45:12.600883  171911 cri.go:89] found id: ""
	I0903 23:45:12.600911  171911 logs.go:282] 0 containers: []
	W0903 23:45:12.600919  171911 logs.go:284] No container was found matching "etcd"
	I0903 23:45:12.600925  171911 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:45:12.600976  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:45:12.634831  171911 cri.go:89] found id: ""
	I0903 23:45:12.634860  171911 logs.go:282] 0 containers: []
	W0903 23:45:12.634868  171911 logs.go:284] No container was found matching "coredns"
	I0903 23:45:12.634874  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:45:12.634932  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:45:12.668965  171911 cri.go:89] found id: ""
	I0903 23:45:12.668993  171911 logs.go:282] 0 containers: []
	W0903 23:45:12.669002  171911 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:45:12.669008  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:45:12.669061  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:45:12.702632  171911 cri.go:89] found id: ""
	I0903 23:45:12.702662  171911 logs.go:282] 0 containers: []
	W0903 23:45:12.702670  171911 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:45:12.702676  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:45:12.702734  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:45:12.736957  171911 cri.go:89] found id: ""
	I0903 23:45:12.736994  171911 logs.go:282] 0 containers: []
	W0903 23:45:12.737005  171911 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:45:12.737013  171911 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:45:12.737096  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:45:12.769324  171911 cri.go:89] found id: ""
	I0903 23:45:12.769353  171911 logs.go:282] 0 containers: []
	W0903 23:45:12.769361  171911 logs.go:284] No container was found matching "kindnet"
	I0903 23:45:12.769367  171911 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0903 23:45:12.769433  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0903 23:45:12.801706  171911 cri.go:89] found id: ""
	I0903 23:45:12.801731  171911 logs.go:282] 0 containers: []
	W0903 23:45:12.801738  171911 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0903 23:45:12.801747  171911 logs.go:123] Gathering logs for kubelet ...
	I0903 23:45:12.801758  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0903 23:45:12.850449  171911 logs.go:123] Gathering logs for dmesg ...
	I0903 23:45:12.850485  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0903 23:45:12.864235  171911 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:45:12.864263  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:45:12.928347  171911 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0903 23:45:12.928372  171911 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:45:12.928385  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0903 23:45:13.002530  171911 logs.go:123] Gathering logs for container status ...
	I0903 23:45:13.002569  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0903 23:45:15.541753  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:45:15.558031  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:45:15.558098  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:45:15.590544  171911 cri.go:89] found id: ""
	I0903 23:45:15.590590  171911 logs.go:282] 0 containers: []
	W0903 23:45:15.590608  171911 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:45:15.590618  171911 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:45:15.590681  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:45:15.623172  171911 cri.go:89] found id: ""
	I0903 23:45:15.623206  171911 logs.go:282] 0 containers: []
	W0903 23:45:15.623214  171911 logs.go:284] No container was found matching "etcd"
	I0903 23:45:15.623220  171911 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:45:15.623271  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:45:15.666374  171911 cri.go:89] found id: ""
	I0903 23:45:15.666413  171911 logs.go:282] 0 containers: []
	W0903 23:45:15.666424  171911 logs.go:284] No container was found matching "coredns"
	I0903 23:45:15.666432  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:45:15.666500  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:45:15.700153  171911 cri.go:89] found id: ""
	I0903 23:45:15.700188  171911 logs.go:282] 0 containers: []
	W0903 23:45:15.700196  171911 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:45:15.700203  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:45:15.700258  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:45:15.734346  171911 cri.go:89] found id: ""
	I0903 23:45:15.734379  171911 logs.go:282] 0 containers: []
	W0903 23:45:15.734391  171911 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:45:15.734401  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:45:15.734468  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:45:15.768125  171911 cri.go:89] found id: ""
	I0903 23:45:15.768151  171911 logs.go:282] 0 containers: []
	W0903 23:45:15.768160  171911 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:45:15.768166  171911 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:45:15.768219  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:45:15.802055  171911 cri.go:89] found id: ""
	I0903 23:45:15.802085  171911 logs.go:282] 0 containers: []
	W0903 23:45:15.802093  171911 logs.go:284] No container was found matching "kindnet"
	I0903 23:45:15.802101  171911 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0903 23:45:15.802155  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0903 23:45:15.835742  171911 cri.go:89] found id: ""
	I0903 23:45:15.835775  171911 logs.go:282] 0 containers: []
	W0903 23:45:15.835785  171911 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0903 23:45:15.835796  171911 logs.go:123] Gathering logs for kubelet ...
	I0903 23:45:15.835809  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0903 23:45:15.887302  171911 logs.go:123] Gathering logs for dmesg ...
	I0903 23:45:15.887339  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0903 23:45:15.900589  171911 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:45:15.900616  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:45:15.963821  171911 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0903 23:45:15.963850  171911 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:45:15.963867  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0903 23:45:16.041873  171911 logs.go:123] Gathering logs for container status ...
	I0903 23:45:16.041910  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0903 23:45:18.579975  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:45:18.596552  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:45:18.596644  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:45:18.637122  171911 cri.go:89] found id: ""
	I0903 23:45:18.637150  171911 logs.go:282] 0 containers: []
	W0903 23:45:18.637159  171911 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:45:18.637168  171911 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:45:18.637231  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:45:18.683926  171911 cri.go:89] found id: ""
	I0903 23:45:18.683965  171911 logs.go:282] 0 containers: []
	W0903 23:45:18.683976  171911 logs.go:284] No container was found matching "etcd"
	I0903 23:45:18.683984  171911 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:45:18.684143  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:45:18.724297  171911 cri.go:89] found id: ""
	I0903 23:45:18.724326  171911 logs.go:282] 0 containers: []
	W0903 23:45:18.724337  171911 logs.go:284] No container was found matching "coredns"
	I0903 23:45:18.724356  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:45:18.724424  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:45:18.767543  171911 cri.go:89] found id: ""
	I0903 23:45:18.767585  171911 logs.go:282] 0 containers: []
	W0903 23:45:18.767594  171911 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:45:18.767601  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:45:18.767666  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:45:18.808984  171911 cri.go:89] found id: ""
	I0903 23:45:18.809023  171911 logs.go:282] 0 containers: []
	W0903 23:45:18.809034  171911 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:45:18.809042  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:45:18.809125  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:45:18.843616  171911 cri.go:89] found id: ""
	I0903 23:45:18.843651  171911 logs.go:282] 0 containers: []
	W0903 23:45:18.843662  171911 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:45:18.843670  171911 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:45:18.843772  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:45:18.878089  171911 cri.go:89] found id: ""
	I0903 23:45:18.878117  171911 logs.go:282] 0 containers: []
	W0903 23:45:18.878125  171911 logs.go:284] No container was found matching "kindnet"
	I0903 23:45:18.878131  171911 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0903 23:45:18.878199  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0903 23:45:18.913557  171911 cri.go:89] found id: ""
	I0903 23:45:18.913590  171911 logs.go:282] 0 containers: []
	W0903 23:45:18.913602  171911 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0903 23:45:18.913613  171911 logs.go:123] Gathering logs for kubelet ...
	I0903 23:45:18.913629  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0903 23:45:18.964473  171911 logs.go:123] Gathering logs for dmesg ...
	I0903 23:45:18.964511  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0903 23:45:18.977841  171911 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:45:18.977868  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:45:19.041151  171911 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0903 23:45:19.041175  171911 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:45:19.041190  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0903 23:45:19.114112  171911 logs.go:123] Gathering logs for container status ...
	I0903 23:45:19.114166  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0903 23:45:21.655099  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:45:21.671751  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:45:21.671826  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:45:21.705950  171911 cri.go:89] found id: ""
	I0903 23:45:21.705985  171911 logs.go:282] 0 containers: []
	W0903 23:45:21.705993  171911 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:45:21.706000  171911 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:45:21.706066  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:45:21.745098  171911 cri.go:89] found id: ""
	I0903 23:45:21.745125  171911 logs.go:282] 0 containers: []
	W0903 23:45:21.745134  171911 logs.go:284] No container was found matching "etcd"
	I0903 23:45:21.745139  171911 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:45:21.745212  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:45:21.787214  171911 cri.go:89] found id: ""
	I0903 23:45:21.787246  171911 logs.go:282] 0 containers: []
	W0903 23:45:21.787259  171911 logs.go:284] No container was found matching "coredns"
	I0903 23:45:21.787267  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:45:21.787340  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:45:21.825966  171911 cri.go:89] found id: ""
	I0903 23:45:21.825999  171911 logs.go:282] 0 containers: []
	W0903 23:45:21.826009  171911 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:45:21.826023  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:45:21.826094  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:45:21.858874  171911 cri.go:89] found id: ""
	I0903 23:45:21.858909  171911 logs.go:282] 0 containers: []
	W0903 23:45:21.858920  171911 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:45:21.858928  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:45:21.858990  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:45:21.892820  171911 cri.go:89] found id: ""
	I0903 23:45:21.892851  171911 logs.go:282] 0 containers: []
	W0903 23:45:21.892862  171911 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:45:21.892869  171911 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:45:21.892938  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:45:21.927139  171911 cri.go:89] found id: ""
	I0903 23:45:21.927167  171911 logs.go:282] 0 containers: []
	W0903 23:45:21.927174  171911 logs.go:284] No container was found matching "kindnet"
	I0903 23:45:21.927180  171911 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0903 23:45:21.927242  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0903 23:45:21.961202  171911 cri.go:89] found id: ""
	I0903 23:45:21.961235  171911 logs.go:282] 0 containers: []
	W0903 23:45:21.961247  171911 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0903 23:45:21.961259  171911 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:45:21.961274  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0903 23:45:22.034253  171911 logs.go:123] Gathering logs for container status ...
	I0903 23:45:22.034307  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0903 23:45:22.081973  171911 logs.go:123] Gathering logs for kubelet ...
	I0903 23:45:22.082014  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0903 23:45:22.136441  171911 logs.go:123] Gathering logs for dmesg ...
	I0903 23:45:22.136507  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0903 23:45:22.153988  171911 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:45:22.154027  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:45:22.218718  171911 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
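
The roughly three-second spacing between successive probes in this section (e.g. the pgrep at 23:45:21.655 followed by the next at 23:45:24.718) suggests a simple polling loop waiting for the API server process to appear. In the pgrep invocation, -f matches against the full command line, -x requires the whole line to match the pattern, and -n selects only the newest matching process. A sketch with that shape follows; the 3 s interval is read off the timestamps, while the deadline and all names are assumptions for illustration, not taken from minikube's source:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    func main() {
    	deadline := time.Now().Add(2 * time.Minute) // assumed budget
    	for time.Now().Before(deadline) {
    		// pgrep exits 0 only if a process matches, so a nil
    		// error means the kube-apiserver process was found.
    		err := exec.Command("sudo", "pgrep", "-xnf",
    			"kube-apiserver.*minikube.*").Run()
    		if err == nil {
    			fmt.Println("kube-apiserver process found")
    			return
    		}
    		time.Sleep(3 * time.Second) // ~3 s cadence seen in the log
    	}
    	fmt.Println("timed out waiting for kube-apiserver")
    }

In this run the loop never succeeds: every probe is followed by another full diagnostics pass, which is why the same gather-and-fail cycle repeats through the rest of the log.
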
	I0903 23:45:24.718932  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:45:24.735304  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:45:24.735366  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:45:24.769484  171911 cri.go:89] found id: ""
	I0903 23:45:24.769526  171911 logs.go:282] 0 containers: []
	W0903 23:45:24.769534  171911 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:45:24.769541  171911 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:45:24.769602  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:45:24.804478  171911 cri.go:89] found id: ""
	I0903 23:45:24.804512  171911 logs.go:282] 0 containers: []
	W0903 23:45:24.804523  171911 logs.go:284] No container was found matching "etcd"
	I0903 23:45:24.804531  171911 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:45:24.804616  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:45:24.839941  171911 cri.go:89] found id: ""
	I0903 23:45:24.839967  171911 logs.go:282] 0 containers: []
	W0903 23:45:24.839974  171911 logs.go:284] No container was found matching "coredns"
	I0903 23:45:24.839980  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:45:24.840043  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:45:24.872589  171911 cri.go:89] found id: ""
	I0903 23:45:24.872631  171911 logs.go:282] 0 containers: []
	W0903 23:45:24.872641  171911 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:45:24.872650  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:45:24.872713  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:45:24.906281  171911 cri.go:89] found id: ""
	I0903 23:45:24.906312  171911 logs.go:282] 0 containers: []
	W0903 23:45:24.906321  171911 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:45:24.906327  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:45:24.906381  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:45:24.940855  171911 cri.go:89] found id: ""
	I0903 23:45:24.940891  171911 logs.go:282] 0 containers: []
	W0903 23:45:24.940902  171911 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:45:24.940910  171911 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:45:24.940979  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:45:24.973046  171911 cri.go:89] found id: ""
	I0903 23:45:24.973075  171911 logs.go:282] 0 containers: []
	W0903 23:45:24.973084  171911 logs.go:284] No container was found matching "kindnet"
	I0903 23:45:24.973091  171911 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0903 23:45:24.973160  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0903 23:45:25.006986  171911 cri.go:89] found id: ""
	I0903 23:45:25.007015  171911 logs.go:282] 0 containers: []
	W0903 23:45:25.007026  171911 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0903 23:45:25.007038  171911 logs.go:123] Gathering logs for kubelet ...
	I0903 23:45:25.007054  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0903 23:45:25.057037  171911 logs.go:123] Gathering logs for dmesg ...
	I0903 23:45:25.057075  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0903 23:45:25.070713  171911 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:45:25.070741  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:45:25.135104  171911 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0903 23:45:25.135129  171911 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:45:25.135142  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0903 23:45:25.211776  171911 logs.go:123] Gathering logs for container status ...
	I0903 23:45:25.211816  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0903 23:45:27.750263  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:45:27.766962  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:45:27.767039  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:45:27.809102  171911 cri.go:89] found id: ""
	I0903 23:45:27.809134  171911 logs.go:282] 0 containers: []
	W0903 23:45:27.809142  171911 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:45:27.809149  171911 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:45:27.809201  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:45:27.852918  171911 cri.go:89] found id: ""
	I0903 23:45:27.852946  171911 logs.go:282] 0 containers: []
	W0903 23:45:27.852954  171911 logs.go:284] No container was found matching "etcd"
	I0903 23:45:27.852961  171911 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:45:27.853025  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:45:27.908523  171911 cri.go:89] found id: ""
	I0903 23:45:27.908554  171911 logs.go:282] 0 containers: []
	W0903 23:45:27.908561  171911 logs.go:284] No container was found matching "coredns"
	I0903 23:45:27.908566  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:45:27.908627  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:45:27.941105  171911 cri.go:89] found id: ""
	I0903 23:45:27.941136  171911 logs.go:282] 0 containers: []
	W0903 23:45:27.941144  171911 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:45:27.941150  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:45:27.941204  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:45:27.974030  171911 cri.go:89] found id: ""
	I0903 23:45:27.974064  171911 logs.go:282] 0 containers: []
	W0903 23:45:27.974075  171911 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:45:27.974082  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:45:27.974149  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:45:28.007829  171911 cri.go:89] found id: ""
	I0903 23:45:28.007857  171911 logs.go:282] 0 containers: []
	W0903 23:45:28.007867  171911 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:45:28.007874  171911 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:45:28.007936  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:45:28.050575  171911 cri.go:89] found id: ""
	I0903 23:45:28.050614  171911 logs.go:282] 0 containers: []
	W0903 23:45:28.050622  171911 logs.go:284] No container was found matching "kindnet"
	I0903 23:45:28.050629  171911 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0903 23:45:28.050684  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0903 23:45:28.085777  171911 cri.go:89] found id: ""
	I0903 23:45:28.085809  171911 logs.go:282] 0 containers: []
	W0903 23:45:28.085817  171911 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0903 23:45:28.085826  171911 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:45:28.085838  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:45:28.150751  171911 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0903 23:45:28.150778  171911 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:45:28.150792  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0903 23:45:28.223955  171911 logs.go:123] Gathering logs for container status ...
	I0903 23:45:28.224000  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0903 23:45:28.262972  171911 logs.go:123] Gathering logs for kubelet ...
	I0903 23:45:28.262999  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0903 23:45:28.311545  171911 logs.go:123] Gathering logs for dmesg ...
	I0903 23:45:28.311580  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
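	Each cycle in this log follows the same shape: probe for a running kube-apiserver process, list each control-plane container through crictl (in any state), then fall back to journalctl/dmesg collection when nothing is found. A minimal bash sketch of that probe loop, built only from the commands visible above; the loop structure, the component list, and the 3-second sleep are illustrative assumptions, not minikube's actual implementation:
	
	  #!/bin/bash
	  # Poll until an apiserver process for this profile shows up.
	  while ! sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
	    # List each control-plane container in any state; empty output means "not found".
	    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	                kube-controller-manager kindnet kubernetes-dashboard; do
	      ids=$(sudo crictl ps -a --quiet --name="$name")
	      [ -z "$ids" ] && echo "No container was found matching \"$name\"" >&2
	    done
	    # Gather the same diagnostics the cycle above collects while waiting.
	    sudo journalctl -u kubelet -n 400 >/dev/null
	    sudo journalctl -u crio -n 400 >/dev/null
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400 >/dev/null
	    sleep 3
	  done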
	I0903 23:45:30.827970  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:45:30.844742  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:45:30.844805  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:45:30.880412  171911 cri.go:89] found id: ""
	I0903 23:45:30.880453  171911 logs.go:282] 0 containers: []
	W0903 23:45:30.880468  171911 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:45:30.880476  171911 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:45:30.880549  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:45:30.913830  171911 cri.go:89] found id: ""
	I0903 23:45:30.913858  171911 logs.go:282] 0 containers: []
	W0903 23:45:30.913867  171911 logs.go:284] No container was found matching "etcd"
	I0903 23:45:30.913872  171911 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:45:30.913935  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:45:30.946611  171911 cri.go:89] found id: ""
	I0903 23:45:30.946641  171911 logs.go:282] 0 containers: []
	W0903 23:45:30.946650  171911 logs.go:284] No container was found matching "coredns"
	I0903 23:45:30.946656  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:45:30.946711  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:45:30.980152  171911 cri.go:89] found id: ""
	I0903 23:45:30.980183  171911 logs.go:282] 0 containers: []
	W0903 23:45:30.980193  171911 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:45:30.980201  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:45:30.980271  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:45:31.015814  171911 cri.go:89] found id: ""
	I0903 23:45:31.015845  171911 logs.go:282] 0 containers: []
	W0903 23:45:31.015856  171911 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:45:31.015863  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:45:31.015932  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:45:31.050513  171911 cri.go:89] found id: ""
	I0903 23:45:31.050543  171911 logs.go:282] 0 containers: []
	W0903 23:45:31.050555  171911 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:45:31.050562  171911 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:45:31.050636  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:45:31.083766  171911 cri.go:89] found id: ""
	I0903 23:45:31.083791  171911 logs.go:282] 0 containers: []
	W0903 23:45:31.083798  171911 logs.go:284] No container was found matching "kindnet"
	I0903 23:45:31.083805  171911 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0903 23:45:31.083864  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0903 23:45:31.117858  171911 cri.go:89] found id: ""
	I0903 23:45:31.117886  171911 logs.go:282] 0 containers: []
	W0903 23:45:31.117893  171911 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0903 23:45:31.117903  171911 logs.go:123] Gathering logs for dmesg ...
	I0903 23:45:31.117922  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0903 23:45:31.131404  171911 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:45:31.131433  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:45:31.195245  171911 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0903 23:45:31.195275  171911 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:45:31.195295  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0903 23:45:31.271630  171911 logs.go:123] Gathering logs for container status ...
	I0903 23:45:31.271671  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0903 23:45:31.310746  171911 logs.go:123] Gathering logs for kubelet ...
	I0903 23:45:31.310780  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0903 23:45:33.861848  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:45:33.878672  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:45:33.878742  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:45:33.911344  171911 cri.go:89] found id: ""
	I0903 23:45:33.911377  171911 logs.go:282] 0 containers: []
	W0903 23:45:33.911388  171911 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:45:33.911396  171911 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:45:33.911458  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:45:33.948348  171911 cri.go:89] found id: ""
	I0903 23:45:33.948377  171911 logs.go:282] 0 containers: []
	W0903 23:45:33.948385  171911 logs.go:284] No container was found matching "etcd"
	I0903 23:45:33.948391  171911 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:45:33.948455  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:45:33.981680  171911 cri.go:89] found id: ""
	I0903 23:45:33.981710  171911 logs.go:282] 0 containers: []
	W0903 23:45:33.981722  171911 logs.go:284] No container was found matching "coredns"
	I0903 23:45:33.981730  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:45:33.981796  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:45:34.013721  171911 cri.go:89] found id: ""
	I0903 23:45:34.013747  171911 logs.go:282] 0 containers: []
	W0903 23:45:34.013755  171911 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:45:34.013762  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:45:34.013827  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:45:34.047612  171911 cri.go:89] found id: ""
	I0903 23:45:34.047644  171911 logs.go:282] 0 containers: []
	W0903 23:45:34.047654  171911 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:45:34.047661  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:45:34.047720  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:45:34.081680  171911 cri.go:89] found id: ""
	I0903 23:45:34.081714  171911 logs.go:282] 0 containers: []
	W0903 23:45:34.081725  171911 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:45:34.081734  171911 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:45:34.081802  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:45:34.117208  171911 cri.go:89] found id: ""
	I0903 23:45:34.117247  171911 logs.go:282] 0 containers: []
	W0903 23:45:34.117258  171911 logs.go:284] No container was found matching "kindnet"
	I0903 23:45:34.117268  171911 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0903 23:45:34.117339  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0903 23:45:34.150598  171911 cri.go:89] found id: ""
	I0903 23:45:34.150626  171911 logs.go:282] 0 containers: []
	W0903 23:45:34.150634  171911 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0903 23:45:34.150644  171911 logs.go:123] Gathering logs for kubelet ...
	I0903 23:45:34.150655  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0903 23:45:34.199612  171911 logs.go:123] Gathering logs for dmesg ...
	I0903 23:45:34.199652  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0903 23:45:34.213484  171911 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:45:34.213513  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:45:34.276337  171911 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0903 23:45:34.276358  171911 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:45:34.276380  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0903 23:45:34.347780  171911 logs.go:123] Gathering logs for container status ...
	I0903 23:45:34.347822  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0903 23:45:36.885583  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:45:36.902360  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:45:36.902439  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:45:36.936103  171911 cri.go:89] found id: ""
	I0903 23:45:36.936133  171911 logs.go:282] 0 containers: []
	W0903 23:45:36.936142  171911 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:45:36.936148  171911 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:45:36.936212  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:45:36.969146  171911 cri.go:89] found id: ""
	I0903 23:45:36.969173  171911 logs.go:282] 0 containers: []
	W0903 23:45:36.969180  171911 logs.go:284] No container was found matching "etcd"
	I0903 23:45:36.969186  171911 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:45:36.969248  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:45:37.002284  171911 cri.go:89] found id: ""
	I0903 23:45:37.002314  171911 logs.go:282] 0 containers: []
	W0903 23:45:37.002324  171911 logs.go:284] No container was found matching "coredns"
	I0903 23:45:37.002331  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:45:37.002385  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:45:37.034701  171911 cri.go:89] found id: ""
	I0903 23:45:37.034731  171911 logs.go:282] 0 containers: []
	W0903 23:45:37.034741  171911 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:45:37.034749  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:45:37.034815  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:45:37.067766  171911 cri.go:89] found id: ""
	I0903 23:45:37.067798  171911 logs.go:282] 0 containers: []
	W0903 23:45:37.067810  171911 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:45:37.067819  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:45:37.067887  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:45:37.100402  171911 cri.go:89] found id: ""
	I0903 23:45:37.100431  171911 logs.go:282] 0 containers: []
	W0903 23:45:37.100439  171911 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:45:37.100445  171911 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:45:37.100495  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:45:37.134783  171911 cri.go:89] found id: ""
	I0903 23:45:37.134814  171911 logs.go:282] 0 containers: []
	W0903 23:45:37.134822  171911 logs.go:284] No container was found matching "kindnet"
	I0903 23:45:37.134828  171911 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0903 23:45:37.134892  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0903 23:45:37.168715  171911 cri.go:89] found id: ""
	I0903 23:45:37.168746  171911 logs.go:282] 0 containers: []
	W0903 23:45:37.168753  171911 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0903 23:45:37.168768  171911 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:45:37.168781  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0903 23:45:37.239216  171911 logs.go:123] Gathering logs for container status ...
	I0903 23:45:37.239259  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0903 23:45:37.278941  171911 logs.go:123] Gathering logs for kubelet ...
	I0903 23:45:37.278977  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0903 23:45:37.327168  171911 logs.go:123] Gathering logs for dmesg ...
	I0903 23:45:37.327207  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0903 23:45:37.340806  171911 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:45:37.340837  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:45:37.402460  171911 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0903 23:45:39.902717  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:45:39.919140  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:45:39.919211  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:45:39.952379  171911 cri.go:89] found id: ""
	I0903 23:45:39.952407  171911 logs.go:282] 0 containers: []
	W0903 23:45:39.952421  171911 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:45:39.952428  171911 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:45:39.952510  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:45:39.986646  171911 cri.go:89] found id: ""
	I0903 23:45:39.986674  171911 logs.go:282] 0 containers: []
	W0903 23:45:39.986682  171911 logs.go:284] No container was found matching "etcd"
	I0903 23:45:39.986688  171911 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:45:39.986750  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:45:40.019946  171911 cri.go:89] found id: ""
	I0903 23:45:40.019984  171911 logs.go:282] 0 containers: []
	W0903 23:45:40.019995  171911 logs.go:284] No container was found matching "coredns"
	I0903 23:45:40.020004  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:45:40.020075  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:45:40.051084  171911 cri.go:89] found id: ""
	I0903 23:45:40.051120  171911 logs.go:282] 0 containers: []
	W0903 23:45:40.051131  171911 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:45:40.051139  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:45:40.051198  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:45:40.084431  171911 cri.go:89] found id: ""
	I0903 23:45:40.084471  171911 logs.go:282] 0 containers: []
	W0903 23:45:40.084485  171911 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:45:40.084493  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:45:40.084590  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:45:40.117261  171911 cri.go:89] found id: ""
	I0903 23:45:40.117289  171911 logs.go:282] 0 containers: []
	W0903 23:45:40.117298  171911 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:45:40.117305  171911 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:45:40.117356  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:45:40.149940  171911 cri.go:89] found id: ""
	I0903 23:45:40.149976  171911 logs.go:282] 0 containers: []
	W0903 23:45:40.149983  171911 logs.go:284] No container was found matching "kindnet"
	I0903 23:45:40.149989  171911 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0903 23:45:40.150049  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0903 23:45:40.185787  171911 cri.go:89] found id: ""
	I0903 23:45:40.185819  171911 logs.go:282] 0 containers: []
	W0903 23:45:40.185828  171911 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0903 23:45:40.185838  171911 logs.go:123] Gathering logs for kubelet ...
	I0903 23:45:40.185849  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0903 23:45:40.236114  171911 logs.go:123] Gathering logs for dmesg ...
	I0903 23:45:40.236151  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0903 23:45:40.249810  171911 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:45:40.249842  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:45:40.315354  171911 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0903 23:45:40.315385  171911 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:45:40.315402  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0903 23:45:40.391973  171911 logs.go:123] Gathering logs for container status ...
	I0903 23:45:40.392014  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0903 23:45:42.929523  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:45:42.946789  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:45:42.946852  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:45:42.981168  171911 cri.go:89] found id: ""
	I0903 23:45:42.981202  171911 logs.go:282] 0 containers: []
	W0903 23:45:42.981214  171911 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:45:42.981223  171911 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:45:42.981290  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:45:43.016160  171911 cri.go:89] found id: ""
	I0903 23:45:43.016191  171911 logs.go:282] 0 containers: []
	W0903 23:45:43.016202  171911 logs.go:284] No container was found matching "etcd"
	I0903 23:45:43.016210  171911 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:45:43.016277  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:45:43.052374  171911 cri.go:89] found id: ""
	I0903 23:45:43.052407  171911 logs.go:282] 0 containers: []
	W0903 23:45:43.052415  171911 logs.go:284] No container was found matching "coredns"
	I0903 23:45:43.052421  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:45:43.052490  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:45:43.087466  171911 cri.go:89] found id: ""
	I0903 23:45:43.087492  171911 logs.go:282] 0 containers: []
	W0903 23:45:43.087499  171911 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:45:43.087506  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:45:43.087578  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:45:43.121733  171911 cri.go:89] found id: ""
	I0903 23:45:43.121770  171911 logs.go:282] 0 containers: []
	W0903 23:45:43.121780  171911 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:45:43.121786  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:45:43.121852  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:45:43.155089  171911 cri.go:89] found id: ""
	I0903 23:45:43.155120  171911 logs.go:282] 0 containers: []
	W0903 23:45:43.155129  171911 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:45:43.155136  171911 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:45:43.155208  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:45:43.187081  171911 cri.go:89] found id: ""
	I0903 23:45:43.187113  171911 logs.go:282] 0 containers: []
	W0903 23:45:43.187124  171911 logs.go:284] No container was found matching "kindnet"
	I0903 23:45:43.187132  171911 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0903 23:45:43.187206  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0903 23:45:43.221988  171911 cri.go:89] found id: ""
	I0903 23:45:43.222020  171911 logs.go:282] 0 containers: []
	W0903 23:45:43.222027  171911 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0903 23:45:43.222037  171911 logs.go:123] Gathering logs for kubelet ...
	I0903 23:45:43.222048  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0903 23:45:43.274015  171911 logs.go:123] Gathering logs for dmesg ...
	I0903 23:45:43.274053  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0903 23:45:43.288204  171911 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:45:43.288237  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:45:43.352172  171911 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0903 23:45:43.352197  171911 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:45:43.352214  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0903 23:45:43.429363  171911 logs.go:123] Gathering logs for container status ...
	I0903 23:45:43.429416  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0903 23:45:45.967138  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:45:45.984430  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:45:45.984508  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:45:46.018620  171911 cri.go:89] found id: ""
	I0903 23:45:46.018656  171911 logs.go:282] 0 containers: []
	W0903 23:45:46.018670  171911 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:45:46.018680  171911 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:45:46.018736  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:45:46.052857  171911 cri.go:89] found id: ""
	I0903 23:45:46.052896  171911 logs.go:282] 0 containers: []
	W0903 23:45:46.052908  171911 logs.go:284] No container was found matching "etcd"
	I0903 23:45:46.052917  171911 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:45:46.052992  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:45:46.086760  171911 cri.go:89] found id: ""
	I0903 23:45:46.086802  171911 logs.go:282] 0 containers: []
	W0903 23:45:46.086815  171911 logs.go:284] No container was found matching "coredns"
	I0903 23:45:46.086824  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:45:46.086897  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:45:46.122770  171911 cri.go:89] found id: ""
	I0903 23:45:46.122808  171911 logs.go:282] 0 containers: []
	W0903 23:45:46.122821  171911 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:45:46.122831  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:45:46.122898  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:45:46.156632  171911 cri.go:89] found id: ""
	I0903 23:45:46.156666  171911 logs.go:282] 0 containers: []
	W0903 23:45:46.156677  171911 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:45:46.156684  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:45:46.156748  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:45:46.189167  171911 cri.go:89] found id: ""
	I0903 23:45:46.189196  171911 logs.go:282] 0 containers: []
	W0903 23:45:46.189204  171911 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:45:46.189211  171911 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:45:46.189281  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:45:46.221676  171911 cri.go:89] found id: ""
	I0903 23:45:46.221703  171911 logs.go:282] 0 containers: []
	W0903 23:45:46.221710  171911 logs.go:284] No container was found matching "kindnet"
	I0903 23:45:46.221716  171911 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0903 23:45:46.221781  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0903 23:45:46.255950  171911 cri.go:89] found id: ""
	I0903 23:45:46.255989  171911 logs.go:282] 0 containers: []
	W0903 23:45:46.256001  171911 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0903 23:45:46.256012  171911 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:45:46.256026  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:45:46.320856  171911 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0903 23:45:46.320887  171911 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:45:46.320904  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0903 23:45:46.395448  171911 logs.go:123] Gathering logs for container status ...
	I0903 23:45:46.395495  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0903 23:45:46.433348  171911 logs.go:123] Gathering logs for kubelet ...
	I0903 23:45:46.433402  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0903 23:45:46.483558  171911 logs.go:123] Gathering logs for dmesg ...
	I0903 23:45:46.483600  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0903 23:45:48.997604  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:45:49.014515  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:45:49.014584  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:45:49.049009  171911 cri.go:89] found id: ""
	I0903 23:45:49.049041  171911 logs.go:282] 0 containers: []
	W0903 23:45:49.049049  171911 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:45:49.049055  171911 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:45:49.049107  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:45:49.082752  171911 cri.go:89] found id: ""
	I0903 23:45:49.082784  171911 logs.go:282] 0 containers: []
	W0903 23:45:49.082792  171911 logs.go:284] No container was found matching "etcd"
	I0903 23:45:49.082799  171911 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:45:49.082853  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:45:49.117820  171911 cri.go:89] found id: ""
	I0903 23:45:49.117851  171911 logs.go:282] 0 containers: []
	W0903 23:45:49.117861  171911 logs.go:284] No container was found matching "coredns"
	I0903 23:45:49.117869  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:45:49.117937  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:45:49.152630  171911 cri.go:89] found id: ""
	I0903 23:45:49.152662  171911 logs.go:282] 0 containers: []
	W0903 23:45:49.152673  171911 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:45:49.152681  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:45:49.152746  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:45:49.186660  171911 cri.go:89] found id: ""
	I0903 23:45:49.186693  171911 logs.go:282] 0 containers: []
	W0903 23:45:49.186705  171911 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:45:49.186715  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:45:49.186787  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:45:49.221850  171911 cri.go:89] found id: ""
	I0903 23:45:49.221879  171911 logs.go:282] 0 containers: []
	W0903 23:45:49.221887  171911 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:45:49.221894  171911 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:45:49.221947  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:45:49.256272  171911 cri.go:89] found id: ""
	I0903 23:45:49.256301  171911 logs.go:282] 0 containers: []
	W0903 23:45:49.256309  171911 logs.go:284] No container was found matching "kindnet"
	I0903 23:45:49.256315  171911 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0903 23:45:49.256378  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0903 23:45:49.292385  171911 cri.go:89] found id: ""
	I0903 23:45:49.292414  171911 logs.go:282] 0 containers: []
	W0903 23:45:49.292422  171911 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0903 23:45:49.292432  171911 logs.go:123] Gathering logs for kubelet ...
	I0903 23:45:49.292446  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0903 23:45:49.343070  171911 logs.go:123] Gathering logs for dmesg ...
	I0903 23:45:49.343109  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0903 23:45:49.356910  171911 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:45:49.356940  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:45:49.423437  171911 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0903 23:45:49.423471  171911 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:45:49.423486  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0903 23:45:49.494062  171911 logs.go:123] Gathering logs for container status ...
	I0903 23:45:49.494108  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0903 23:45:52.034573  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:45:52.051154  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:45:52.051217  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:45:52.088178  171911 cri.go:89] found id: ""
	I0903 23:45:52.088205  171911 logs.go:282] 0 containers: []
	W0903 23:45:52.088214  171911 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:45:52.088222  171911 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:45:52.088284  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:45:52.122560  171911 cri.go:89] found id: ""
	I0903 23:45:52.122595  171911 logs.go:282] 0 containers: []
	W0903 23:45:52.122606  171911 logs.go:284] No container was found matching "etcd"
	I0903 23:45:52.122617  171911 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:45:52.122687  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:45:52.154593  171911 cri.go:89] found id: ""
	I0903 23:45:52.154628  171911 logs.go:282] 0 containers: []
	W0903 23:45:52.154636  171911 logs.go:284] No container was found matching "coredns"
	I0903 23:45:52.154646  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:45:52.154700  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:45:52.188028  171911 cri.go:89] found id: ""
	I0903 23:45:52.188066  171911 logs.go:282] 0 containers: []
	W0903 23:45:52.188079  171911 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:45:52.188088  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:45:52.188162  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:45:52.223140  171911 cri.go:89] found id: ""
	I0903 23:45:52.223165  171911 logs.go:282] 0 containers: []
	W0903 23:45:52.223172  171911 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:45:52.223178  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:45:52.223231  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:45:52.267817  171911 cri.go:89] found id: ""
	I0903 23:45:52.267851  171911 logs.go:282] 0 containers: []
	W0903 23:45:52.267862  171911 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:45:52.267869  171911 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:45:52.267936  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:45:52.302187  171911 cri.go:89] found id: ""
	I0903 23:45:52.302224  171911 logs.go:282] 0 containers: []
	W0903 23:45:52.302236  171911 logs.go:284] No container was found matching "kindnet"
	I0903 23:45:52.302245  171911 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0903 23:45:52.302315  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0903 23:45:52.336716  171911 cri.go:89] found id: ""
	I0903 23:45:52.336742  171911 logs.go:282] 0 containers: []
	W0903 23:45:52.336750  171911 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0903 23:45:52.336761  171911 logs.go:123] Gathering logs for container status ...
	I0903 23:45:52.336776  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0903 23:45:52.376759  171911 logs.go:123] Gathering logs for kubelet ...
	I0903 23:45:52.376793  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0903 23:45:52.424230  171911 logs.go:123] Gathering logs for dmesg ...
	I0903 23:45:52.424274  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0903 23:45:52.438819  171911 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:45:52.438850  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:45:52.505537  171911 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0903 23:45:52.505562  171911 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:45:52.505577  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0903 23:45:55.082568  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:45:55.100018  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:45:55.100095  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:45:55.135160  171911 cri.go:89] found id: ""
	I0903 23:45:55.135189  171911 logs.go:282] 0 containers: []
	W0903 23:45:55.135201  171911 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:45:55.135210  171911 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:45:55.135268  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:45:55.175763  171911 cri.go:89] found id: ""
	I0903 23:45:55.175800  171911 logs.go:282] 0 containers: []
	W0903 23:45:55.175808  171911 logs.go:284] No container was found matching "etcd"
	I0903 23:45:55.175814  171911 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:45:55.175875  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:45:55.209987  171911 cri.go:89] found id: ""
	I0903 23:45:55.210015  171911 logs.go:282] 0 containers: []
	W0903 23:45:55.210024  171911 logs.go:284] No container was found matching "coredns"
	I0903 23:45:55.210030  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:45:55.210090  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:45:55.244587  171911 cri.go:89] found id: ""
	I0903 23:45:55.244615  171911 logs.go:282] 0 containers: []
	W0903 23:45:55.244623  171911 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:45:55.244630  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:45:55.244699  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:45:55.279333  171911 cri.go:89] found id: ""
	I0903 23:45:55.279363  171911 logs.go:282] 0 containers: []
	W0903 23:45:55.279373  171911 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:45:55.279381  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:45:55.279451  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:45:55.313220  171911 cri.go:89] found id: ""
	I0903 23:45:55.313263  171911 logs.go:282] 0 containers: []
	W0903 23:45:55.313273  171911 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:45:55.313281  171911 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:45:55.313355  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:45:55.348181  171911 cri.go:89] found id: ""
	I0903 23:45:55.348215  171911 logs.go:282] 0 containers: []
	W0903 23:45:55.348224  171911 logs.go:284] No container was found matching "kindnet"
	I0903 23:45:55.348230  171911 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0903 23:45:55.348299  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0903 23:45:55.381456  171911 cri.go:89] found id: ""
	I0903 23:45:55.381482  171911 logs.go:282] 0 containers: []
	W0903 23:45:55.381490  171911 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0903 23:45:55.381500  171911 logs.go:123] Gathering logs for kubelet ...
	I0903 23:45:55.381516  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0903 23:45:55.433817  171911 logs.go:123] Gathering logs for dmesg ...
	I0903 23:45:55.433856  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0903 23:45:55.447772  171911 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:45:55.447804  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:45:55.513762  171911 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0903 23:45:55.513795  171911 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:45:55.513812  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0903 23:45:55.585576  171911 logs.go:123] Gathering logs for container status ...
	I0903 23:45:55.585615  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0903 23:45:58.125483  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:45:58.142430  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:45:58.142505  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:45:58.177668  171911 cri.go:89] found id: ""
	I0903 23:45:58.177697  171911 logs.go:282] 0 containers: []
	W0903 23:45:58.177709  171911 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:45:58.177717  171911 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:45:58.177791  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:45:58.212662  171911 cri.go:89] found id: ""
	I0903 23:45:58.212688  171911 logs.go:282] 0 containers: []
	W0903 23:45:58.212697  171911 logs.go:284] No container was found matching "etcd"
	I0903 23:45:58.212705  171911 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:45:58.212766  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:45:58.248588  171911 cri.go:89] found id: ""
	I0903 23:45:58.248616  171911 logs.go:282] 0 containers: []
	W0903 23:45:58.248623  171911 logs.go:284] No container was found matching "coredns"
	I0903 23:45:58.248629  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:45:58.248684  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:45:58.283427  171911 cri.go:89] found id: ""
	I0903 23:45:58.283459  171911 logs.go:282] 0 containers: []
	W0903 23:45:58.283468  171911 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:45:58.283475  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:45:58.283537  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:45:58.319164  171911 cri.go:89] found id: ""
	I0903 23:45:58.319195  171911 logs.go:282] 0 containers: []
	W0903 23:45:58.319203  171911 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:45:58.319209  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:45:58.319265  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:45:58.354722  171911 cri.go:89] found id: ""
	I0903 23:45:58.354750  171911 logs.go:282] 0 containers: []
	W0903 23:45:58.354758  171911 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:45:58.354764  171911 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:45:58.354816  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:45:58.389144  171911 cri.go:89] found id: ""
	I0903 23:45:58.389171  171911 logs.go:282] 0 containers: []
	W0903 23:45:58.389181  171911 logs.go:284] No container was found matching "kindnet"
	I0903 23:45:58.389187  171911 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0903 23:45:58.389240  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0903 23:45:58.423096  171911 cri.go:89] found id: ""
	I0903 23:45:58.423125  171911 logs.go:282] 0 containers: []
	W0903 23:45:58.423134  171911 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0903 23:45:58.423144  171911 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:45:58.423158  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0903 23:45:58.500171  171911 logs.go:123] Gathering logs for container status ...
	I0903 23:45:58.500208  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0903 23:45:58.538635  171911 logs.go:123] Gathering logs for kubelet ...
	I0903 23:45:58.538663  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0903 23:45:58.584846  171911 logs.go:123] Gathering logs for dmesg ...
	I0903 23:45:58.584882  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0903 23:45:58.598653  171911 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:45:58.598685  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:45:58.666401  171911 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0903 23:46:01.168834  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:46:01.185866  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:46:01.185953  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:46:01.219970  171911 cri.go:89] found id: ""
	I0903 23:46:01.219998  171911 logs.go:282] 0 containers: []
	W0903 23:46:01.220006  171911 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:46:01.220012  171911 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:46:01.220075  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:46:01.253640  171911 cri.go:89] found id: ""
	I0903 23:46:01.253673  171911 logs.go:282] 0 containers: []
	W0903 23:46:01.253683  171911 logs.go:284] No container was found matching "etcd"
	I0903 23:46:01.253691  171911 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:46:01.253756  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:46:01.288533  171911 cri.go:89] found id: ""
	I0903 23:46:01.288564  171911 logs.go:282] 0 containers: []
	W0903 23:46:01.288576  171911 logs.go:284] No container was found matching "coredns"
	I0903 23:46:01.288584  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:46:01.288655  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:46:01.323184  171911 cri.go:89] found id: ""
	I0903 23:46:01.323217  171911 logs.go:282] 0 containers: []
	W0903 23:46:01.323226  171911 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:46:01.323232  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:46:01.323289  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:46:01.356988  171911 cri.go:89] found id: ""
	I0903 23:46:01.357023  171911 logs.go:282] 0 containers: []
	W0903 23:46:01.357034  171911 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:46:01.357045  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:46:01.357106  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:46:01.390140  171911 cri.go:89] found id: ""
	I0903 23:46:01.390168  171911 logs.go:282] 0 containers: []
	W0903 23:46:01.390176  171911 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:46:01.390182  171911 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:46:01.390247  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:46:01.423178  171911 cri.go:89] found id: ""
	I0903 23:46:01.423207  171911 logs.go:282] 0 containers: []
	W0903 23:46:01.423215  171911 logs.go:284] No container was found matching "kindnet"
	I0903 23:46:01.423222  171911 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0903 23:46:01.423285  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0903 23:46:01.461100  171911 cri.go:89] found id: ""
	I0903 23:46:01.461138  171911 logs.go:282] 0 containers: []
	W0903 23:46:01.461148  171911 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0903 23:46:01.461160  171911 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:46:01.461185  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0903 23:46:01.535231  171911 logs.go:123] Gathering logs for container status ...
	I0903 23:46:01.535274  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0903 23:46:01.574120  171911 logs.go:123] Gathering logs for kubelet ...
	I0903 23:46:01.574154  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0903 23:46:01.621782  171911 logs.go:123] Gathering logs for dmesg ...
	I0903 23:46:01.621817  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0903 23:46:01.642205  171911 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:46:01.642246  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:46:01.707505  171911 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
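	
	Each cycle above is the driver's apiserver readiness probe: look for a running kube-apiserver process, ask the CRI runtime for every expected control-plane container, then gather kubelet/dmesg/CRI-O/crictl logs and retry "kubectl describe nodes". A minimal manual equivalent, run inside the VM (commands taken from the log lines above; the loop wrapper is illustrative, not minikube's code):
	
	# does an apiserver process exist at all?
	sudo pgrep -xnf 'kube-apiserver.*minikube.*'
	# are any control-plane containers known to the CRI runtime?
	for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager; do
	  sudo crictl ps -a --quiet --name="$c"
	done
	# can the apiserver answer a real request?
	sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
	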
	[... the probe cycle above repeats every ~3s (23:46:04, 23:46:07, 23:46:10, 23:46:13, 23:46:16, 23:46:19, 23:46:22, 23:46:25) with identical results, apart from the order in which the log sources are gathered: no kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet, or kubernetes-dashboard containers are found, and every "kubectl describe nodes" attempt is refused on localhost:8443; the final pass, at 23:46:28, is kept below ...]
	I0903 23:46:28.679659  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:46:28.696950  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:46:28.697030  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:46:28.730995  171911 cri.go:89] found id: ""
	I0903 23:46:28.731026  171911 logs.go:282] 0 containers: []
	W0903 23:46:28.731039  171911 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:46:28.731047  171911 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:46:28.731121  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:46:28.765348  171911 cri.go:89] found id: ""
	I0903 23:46:28.765377  171911 logs.go:282] 0 containers: []
	W0903 23:46:28.765396  171911 logs.go:284] No container was found matching "etcd"
	I0903 23:46:28.765404  171911 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:46:28.765471  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:46:28.801427  171911 cri.go:89] found id: ""
	I0903 23:46:28.801459  171911 logs.go:282] 0 containers: []
	W0903 23:46:28.801470  171911 logs.go:284] No container was found matching "coredns"
	I0903 23:46:28.801478  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:46:28.801545  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:46:28.836740  171911 cri.go:89] found id: ""
	I0903 23:46:28.836766  171911 logs.go:282] 0 containers: []
	W0903 23:46:28.836775  171911 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:46:28.836781  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:46:28.836865  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:46:28.872484  171911 cri.go:89] found id: ""
	I0903 23:46:28.872517  171911 logs.go:282] 0 containers: []
	W0903 23:46:28.872528  171911 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:46:28.872538  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:46:28.872619  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:46:28.906796  171911 cri.go:89] found id: ""
	I0903 23:46:28.906840  171911 logs.go:282] 0 containers: []
	W0903 23:46:28.906854  171911 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:46:28.906864  171911 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:46:28.906936  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:46:28.941330  171911 cri.go:89] found id: ""
	I0903 23:46:28.941359  171911 logs.go:282] 0 containers: []
	W0903 23:46:28.941367  171911 logs.go:284] No container was found matching "kindnet"
	I0903 23:46:28.941373  171911 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0903 23:46:28.941447  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0903 23:46:28.975273  171911 cri.go:89] found id: ""
	I0903 23:46:28.975304  171911 logs.go:282] 0 containers: []
	W0903 23:46:28.975316  171911 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0903 23:46:28.975328  171911 logs.go:123] Gathering logs for container status ...
	I0903 23:46:28.975351  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0903 23:46:29.013344  171911 logs.go:123] Gathering logs for kubelet ...
	I0903 23:46:29.013374  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0903 23:46:29.062906  171911 logs.go:123] Gathering logs for dmesg ...
	I0903 23:46:29.062943  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0903 23:46:29.077068  171911 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:46:29.077094  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:46:29.141017  171911 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0903 23:46:29.141041  171911 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:46:29.141059  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0903 23:46:31.720110  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:46:31.737478  171911 kubeadm.go:593] duration metric: took 4m4.418875365s to restartPrimaryControlPlane
	W0903 23:46:31.737562  171911 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0903 23:46:31.737592  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0903 23:46:36.182110  171911 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (4.444484741s)
	I0903 23:46:36.182205  171911 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0903 23:46:36.197763  171911 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0903 23:46:36.209295  171911 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0903 23:46:36.220561  171911 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0903 23:46:36.220584  171911 kubeadm.go:157] found existing configuration files:
	
	I0903 23:46:36.220630  171911 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0903 23:46:36.231194  171911 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0903 23:46:36.231261  171911 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0903 23:46:36.242263  171911 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0903 23:46:36.252204  171911 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0903 23:46:36.252278  171911 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0903 23:46:36.263654  171911 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0903 23:46:36.274160  171911 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0903 23:46:36.274216  171911 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0903 23:46:36.285535  171911 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0903 23:46:36.296495  171911 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0903 23:46:36.296566  171911 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
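	
	The four grep/rm pairs above are the stale-kubeconfig cleanup: any /etc/kubernetes/*.conf that does not mention the expected control-plane endpoint is deleted so that "kubeadm init" can rewrite it. Condensed into one loop (endpoint and paths verbatim from the log; the loop form is illustrative):
	
	for f in admin kubelet controller-manager scheduler; do
	  # grep exits non-zero when the endpoint is absent or the file is missing
	  sudo grep -q 'https://control-plane.minikube.internal:8443' "/etc/kubernetes/$f.conf" \
	    || sudo rm -f "/etc/kubernetes/$f.conf"
	done
	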
	I0903 23:46:36.308036  171911 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0903 23:46:36.376723  171911 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0903 23:46:36.376807  171911 kubeadm.go:310] [preflight] Running pre-flight checks
	I0903 23:46:36.507237  171911 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0903 23:46:36.507356  171911 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0903 23:46:36.507451  171911 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
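	
	As the preflight hint says, the image pull can be done ahead of time. Using the same binary path and config file that the init step below uses, that would presumably look like the following (a sketch based on kubeadm's own hint, not a command taken from this log):
	
	sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" \
	  kubeadm config images pull --config /var/tmp/minikube/kubeadm.yaml
	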
	I0903 23:46:36.676775  171911 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0903 23:46:36.678771  171911 out.go:252]   - Generating certificates and keys ...
	I0903 23:46:36.678910  171911 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0903 23:46:36.679002  171911 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0903 23:46:36.679121  171911 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0903 23:46:36.679204  171911 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0903 23:46:36.679317  171911 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0903 23:46:36.679385  171911 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0903 23:46:36.679592  171911 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0903 23:46:36.680075  171911 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0903 23:46:36.680443  171911 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0903 23:46:36.680690  171911 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0903 23:46:36.680741  171911 kubeadm.go:310] [certs] Using the existing "sa" key
	I0903 23:46:36.680801  171911 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0903 23:46:37.040729  171911 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0903 23:46:37.327107  171911 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0903 23:46:37.592932  171911 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0903 23:46:37.842405  171911 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0903 23:46:37.860457  171911 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0903 23:46:37.861477  171911 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0903 23:46:37.861541  171911 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0903 23:46:38.009088  171911 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0903 23:46:38.010918  171911 out.go:252]   - Booting up control plane ...
	I0903 23:46:38.011062  171911 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0903 23:46:38.018027  171911 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0903 23:46:38.018106  171911 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0903 23:46:38.018634  171911 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0903 23:46:38.023296  171911 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0903 23:47:18.025738  171911 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0903 23:47:18.026296  171911 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0903 23:47:18.026552  171911 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0903 23:47:23.027174  171911 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0903 23:47:23.027478  171911 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0903 23:47:33.028031  171911 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0903 23:47:33.028314  171911 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0903 23:47:53.028650  171911 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0903 23:47:53.028911  171911 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0903 23:48:33.031053  171911 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0903 23:48:33.031367  171911 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0903 23:48:33.031406  171911 kubeadm.go:310] 
	I0903 23:48:33.031457  171911 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0903 23:48:33.031522  171911 kubeadm.go:310] 		timed out waiting for the condition
	I0903 23:48:33.031531  171911 kubeadm.go:310] 
	I0903 23:48:33.031571  171911 kubeadm.go:310] 	This error is likely caused by:
	I0903 23:48:33.031621  171911 kubeadm.go:310] 		- The kubelet is not running
	I0903 23:48:33.031747  171911 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0903 23:48:33.031758  171911 kubeadm.go:310] 
	I0903 23:48:33.031898  171911 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0903 23:48:33.031946  171911 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0903 23:48:33.032002  171911 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0903 23:48:33.032011  171911 kubeadm.go:310] 
	I0903 23:48:33.032171  171911 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0903 23:48:33.032298  171911 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0903 23:48:33.032308  171911 kubeadm.go:310] 
	I0903 23:48:33.032463  171911 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0903 23:48:33.032612  171911 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0903 23:48:33.032693  171911 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0903 23:48:33.032780  171911 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0903 23:48:33.032797  171911 kubeadm.go:310] 
	I0903 23:48:33.033539  171911 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0903 23:48:33.033643  171911 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0903 23:48:33.033735  171911 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
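The probe that kubeadm keeps retrying above can be reproduced by hand against the node; a minimal check over SSH, using the endpoint and curl flags quoted in the kubelet-check lines (profile name taken from this run):

	out/minikube-linux-amd64 -p old-k8s-version-335468 ssh "curl -sSL http://localhost:10248/healthz"

A healthy kubelet answers "ok" on this endpoint; the "connection refused" above means nothing is listening on 10248 at all, i.e. the kubelet process is down rather than merely unready.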
	W0903 23:48:33.033908  171911 out.go:285] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
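The troubleshooting commands the error text suggests can all be driven from the host through the same SSH path the test uses; a sketch, assuming the profile name from this run and the crio socket path quoted above:

	out/minikube-linux-amd64 -p old-k8s-version-335468 ssh "sudo systemctl status kubelet"
	out/minikube-linux-amd64 -p old-k8s-version-335468 ssh "sudo journalctl -xeu kubelet | tail -n 100"
	out/minikube-linux-amd64 -p old-k8s-version-335468 ssh "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause"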
	
	I0903 23:48:33.033966  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0903 23:48:33.484811  171911 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0903 23:48:33.501986  171911 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0903 23:48:33.513610  171911 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0903 23:48:33.513635  171911 kubeadm.go:157] found existing configuration files:
	
	I0903 23:48:33.513694  171911 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0903 23:48:33.524062  171911 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0903 23:48:33.524128  171911 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0903 23:48:33.534922  171911 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0903 23:48:33.544314  171911 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0903 23:48:33.544364  171911 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0903 23:48:33.555345  171911 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0903 23:48:33.565515  171911 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0903 23:48:33.565578  171911 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0903 23:48:33.576111  171911 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0903 23:48:33.586276  171911 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0903 23:48:33.586335  171911 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
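The four grep/rm pairs above are minikube's stale-config sweep: any kubeconfig that does not reference https://control-plane.minikube.internal:8443 is treated as stale and removed before the retry. Condensed into a shell loop (a sketch of the same logic; minikube itself issues each command separately over SSH, as the Run: lines show):

	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" \
	    || sudo rm -f "/etc/kubernetes/$f"   # missing or mismatched -> remove
	done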
	I0903 23:48:33.597298  171911 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0903 23:48:33.791164  171911 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0903 23:50:29.735983  171911 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0903 23:50:29.736108  171911 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0903 23:50:29.738473  171911 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0903 23:50:29.738539  171911 kubeadm.go:310] [preflight] Running pre-flight checks
	I0903 23:50:29.738632  171911 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0903 23:50:29.738777  171911 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0903 23:50:29.738908  171911 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0903 23:50:29.738994  171911 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0903 23:50:29.740823  171911 out.go:252]   - Generating certificates and keys ...
	I0903 23:50:29.740897  171911 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0903 23:50:29.740956  171911 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0903 23:50:29.741026  171911 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0903 23:50:29.741099  171911 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0903 23:50:29.741175  171911 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0903 23:50:29.741225  171911 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0903 23:50:29.741281  171911 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0903 23:50:29.741336  171911 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0903 23:50:29.741423  171911 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0903 23:50:29.741518  171911 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0903 23:50:29.741593  171911 kubeadm.go:310] [certs] Using the existing "sa" key
	I0903 23:50:29.741669  171911 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0903 23:50:29.741746  171911 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0903 23:50:29.741831  171911 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0903 23:50:29.741921  171911 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0903 23:50:29.742004  171911 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0903 23:50:29.742142  171911 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0903 23:50:29.742267  171911 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0903 23:50:29.742339  171911 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0903 23:50:29.742442  171911 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0903 23:50:29.744016  171911 out.go:252]   - Booting up control plane ...
	I0903 23:50:29.744169  171911 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0903 23:50:29.744283  171911 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0903 23:50:29.744364  171911 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0903 23:50:29.744481  171911 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0903 23:50:29.744722  171911 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0903 23:50:29.744772  171911 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0903 23:50:29.744856  171911 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0903 23:50:29.745144  171911 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0903 23:50:29.745256  171911 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0903 23:50:29.745481  171911 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0903 23:50:29.745588  171911 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0903 23:50:29.745791  171911 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0903 23:50:29.745882  171911 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0903 23:50:29.746079  171911 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0903 23:50:29.746151  171911 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0903 23:50:29.746327  171911 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0903 23:50:29.746336  171911 kubeadm.go:310] 
	I0903 23:50:29.746385  171911 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0903 23:50:29.746439  171911 kubeadm.go:310] 		timed out waiting for the condition
	I0903 23:50:29.746449  171911 kubeadm.go:310] 
	I0903 23:50:29.746505  171911 kubeadm.go:310] 	This error is likely caused by:
	I0903 23:50:29.746554  171911 kubeadm.go:310] 		- The kubelet is not running
	I0903 23:50:29.746678  171911 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0903 23:50:29.746686  171911 kubeadm.go:310] 
	I0903 23:50:29.746808  171911 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0903 23:50:29.746856  171911 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0903 23:50:29.746908  171911 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0903 23:50:29.746918  171911 kubeadm.go:310] 
	I0903 23:50:29.747078  171911 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0903 23:50:29.747201  171911 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0903 23:50:29.747208  171911 kubeadm.go:310] 
	I0903 23:50:29.747368  171911 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0903 23:50:29.747487  171911 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0903 23:50:29.747603  171911 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0903 23:50:29.747684  171911 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0903 23:50:29.747736  171911 kubeadm.go:310] 
	I0903 23:50:29.747765  171911 kubeadm.go:394] duration metric: took 8m2.477240692s to StartCluster
	I0903 23:50:29.747828  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:50:29.747896  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:50:29.786098  171911 cri.go:89] found id: ""
	I0903 23:50:29.786144  171911 logs.go:282] 0 containers: []
	W0903 23:50:29.786162  171911 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:50:29.786169  171911 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:50:29.786251  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:50:29.819064  171911 cri.go:89] found id: ""
	I0903 23:50:29.819095  171911 logs.go:282] 0 containers: []
	W0903 23:50:29.819103  171911 logs.go:284] No container was found matching "etcd"
	I0903 23:50:29.819109  171911 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:50:29.819164  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:50:29.853192  171911 cri.go:89] found id: ""
	I0903 23:50:29.853223  171911 logs.go:282] 0 containers: []
	W0903 23:50:29.853247  171911 logs.go:284] No container was found matching "coredns"
	I0903 23:50:29.853255  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:50:29.853324  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:50:29.885949  171911 cri.go:89] found id: ""
	I0903 23:50:29.885979  171911 logs.go:282] 0 containers: []
	W0903 23:50:29.885991  171911 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:50:29.885999  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:50:29.886051  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:50:29.920423  171911 cri.go:89] found id: ""
	I0903 23:50:29.920451  171911 logs.go:282] 0 containers: []
	W0903 23:50:29.920458  171911 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:50:29.920464  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:50:29.920516  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:50:29.955106  171911 cri.go:89] found id: ""
	I0903 23:50:29.955142  171911 logs.go:282] 0 containers: []
	W0903 23:50:29.955153  171911 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:50:29.955161  171911 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:50:29.955241  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:50:29.988125  171911 cri.go:89] found id: ""
	I0903 23:50:29.988151  171911 logs.go:282] 0 containers: []
	W0903 23:50:29.988159  171911 logs.go:284] No container was found matching "kindnet"
	I0903 23:50:29.988166  171911 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0903 23:50:29.988220  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0903 23:50:30.022768  171911 cri.go:89] found id: ""
	I0903 23:50:30.022795  171911 logs.go:282] 0 containers: []
	W0903 23:50:30.022803  171911 logs.go:284] No container was found matching "kubernetes-dashboard"
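The sequence above is minikube's post-mortem container scan: one crictl query per expected control-plane component, every one of which returns empty here. The same scan as a loop (a sketch over the commands quoted above, not minikube's actual implementation):

	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	            kube-controller-manager kindnet kubernetes-dashboard; do
	  echo "== $name =="
	  sudo crictl ps -a --quiet --name="$name"   # empty output = container was never created
	done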
	I0903 23:50:30.022813  171911 logs.go:123] Gathering logs for container status ...
	I0903 23:50:30.022828  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0903 23:50:30.059016  171911 logs.go:123] Gathering logs for kubelet ...
	I0903 23:50:30.059049  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0903 23:50:30.108030  171911 logs.go:123] Gathering logs for dmesg ...
	I0903 23:50:30.108065  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0903 23:50:30.121879  171911 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:50:30.121906  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:50:30.190324  171911 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0903 23:50:30.190349  171911 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:50:30.190362  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
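For reference, the four log sources gathered above can be pulled manually inside the guest with the exact commands minikube runs (paths and flags copied from the Run: lines; the describe-nodes call fails here because the apiserver on localhost:8443 never came up):

	sudo journalctl -u kubelet -n 400
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
	sudo journalctl -u crio -n 400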
	W0903 23:50:30.296724  171911 out.go:434] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0903 23:50:30.296816  171911 out.go:285] * 
	W0903 23:50:30.296931  171911 out.go:285] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0903 23:50:30.296951  171911 out.go:285] * 
	W0903 23:50:30.299691  171911 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0903 23:50:30.303743  171911 out.go:203] 
	W0903 23:50:30.304964  171911 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0903 23:50:30.305026  171911 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0903 23:50:30.305059  171911 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0903 23:50:30.306733  171911 out.go:203] 
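Acting on the suggestion above would mean restarting the profile with the kubelet's cgroup driver pinned to systemd; a sketch of the command for this run (the other start flags the test passes are not shown in this excerpt and are omitted here):

	out/minikube-linux-amd64 start -p old-k8s-version-335468 --extra-config=kubelet.cgroup-driver=systemd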
	
	
	==> CRI-O <==
	Sep 03 23:50:31 old-k8s-version-335468 crio[804]: time="2025-09-03 23:50:31.426690676Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1756943431426669440,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=aeb7037a-d172-4b0b-8d81-cda7aea7ee87 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 03 23:50:31 old-k8s-version-335468 crio[804]: time="2025-09-03 23:50:31.427191537Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8cd8c51f-5977-44a3-8df1-0fcb754895e8 name=/runtime.v1.RuntimeService/ListContainers
	Sep 03 23:50:31 old-k8s-version-335468 crio[804]: time="2025-09-03 23:50:31.427277050Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8cd8c51f-5977-44a3-8df1-0fcb754895e8 name=/runtime.v1.RuntimeService/ListContainers
	Sep 03 23:50:31 old-k8s-version-335468 crio[804]: time="2025-09-03 23:50:31.427315182Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=8cd8c51f-5977-44a3-8df1-0fcb754895e8 name=/runtime.v1.RuntimeService/ListContainers
	Sep 03 23:50:31 old-k8s-version-335468 crio[804]: time="2025-09-03 23:50:31.457308223Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c124ac45-4666-491c-9af8-dc0fe74843e6 name=/runtime.v1.RuntimeService/Version
	Sep 03 23:50:31 old-k8s-version-335468 crio[804]: time="2025-09-03 23:50:31.457597102Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c124ac45-4666-491c-9af8-dc0fe74843e6 name=/runtime.v1.RuntimeService/Version
	Sep 03 23:50:31 old-k8s-version-335468 crio[804]: time="2025-09-03 23:50:31.459312586Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=35f191d7-78ae-40d7-9545-ef499869c90d name=/runtime.v1.ImageService/ImageFsInfo
	Sep 03 23:50:31 old-k8s-version-335468 crio[804]: time="2025-09-03 23:50:31.459867049Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1756943431459752846,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=35f191d7-78ae-40d7-9545-ef499869c90d name=/runtime.v1.ImageService/ImageFsInfo
	Sep 03 23:50:31 old-k8s-version-335468 crio[804]: time="2025-09-03 23:50:31.460851875Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=93a62d74-6329-43a1-8880-18b6d77c6d2f name=/runtime.v1.RuntimeService/ListContainers
	Sep 03 23:50:31 old-k8s-version-335468 crio[804]: time="2025-09-03 23:50:31.460924993Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=93a62d74-6329-43a1-8880-18b6d77c6d2f name=/runtime.v1.RuntimeService/ListContainers
	Sep 03 23:50:31 old-k8s-version-335468 crio[804]: time="2025-09-03 23:50:31.460963761Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=93a62d74-6329-43a1-8880-18b6d77c6d2f name=/runtime.v1.RuntimeService/ListContainers
	Sep 03 23:50:31 old-k8s-version-335468 crio[804]: time="2025-09-03 23:50:31.492418576Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=165f7513-bd3e-4bf7-b7d4-f0c6da6f0d2d name=/runtime.v1.RuntimeService/Version
	Sep 03 23:50:31 old-k8s-version-335468 crio[804]: time="2025-09-03 23:50:31.492498612Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=165f7513-bd3e-4bf7-b7d4-f0c6da6f0d2d name=/runtime.v1.RuntimeService/Version
	Sep 03 23:50:31 old-k8s-version-335468 crio[804]: time="2025-09-03 23:50:31.493502859Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f30b90f6-c7c9-4db7-afec-92647250e136 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 03 23:50:31 old-k8s-version-335468 crio[804]: time="2025-09-03 23:50:31.493876368Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1756943431493855901,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f30b90f6-c7c9-4db7-afec-92647250e136 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 03 23:50:31 old-k8s-version-335468 crio[804]: time="2025-09-03 23:50:31.494362165Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=828de4fb-f7c7-48f1-86c2-317804ab2fa5 name=/runtime.v1.RuntimeService/ListContainers
	Sep 03 23:50:31 old-k8s-version-335468 crio[804]: time="2025-09-03 23:50:31.494410922Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=828de4fb-f7c7-48f1-86c2-317804ab2fa5 name=/runtime.v1.RuntimeService/ListContainers
	Sep 03 23:50:31 old-k8s-version-335468 crio[804]: time="2025-09-03 23:50:31.494439721Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=828de4fb-f7c7-48f1-86c2-317804ab2fa5 name=/runtime.v1.RuntimeService/ListContainers
	Sep 03 23:50:31 old-k8s-version-335468 crio[804]: time="2025-09-03 23:50:31.526381967Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=cb963553-98c6-4da0-a076-5d55b403029f name=/runtime.v1.RuntimeService/Version
	Sep 03 23:50:31 old-k8s-version-335468 crio[804]: time="2025-09-03 23:50:31.526600188Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=cb963553-98c6-4da0-a076-5d55b403029f name=/runtime.v1.RuntimeService/Version
	Sep 03 23:50:31 old-k8s-version-335468 crio[804]: time="2025-09-03 23:50:31.528396526Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a4b83d75-4719-4a1b-9966-a8fc1284ea97 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 03 23:50:31 old-k8s-version-335468 crio[804]: time="2025-09-03 23:50:31.528796989Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1756943431528776641,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a4b83d75-4719-4a1b-9966-a8fc1284ea97 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 03 23:50:31 old-k8s-version-335468 crio[804]: time="2025-09-03 23:50:31.529309740Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=924cf225-cf9e-4003-a6a6-e1b75955c6fb name=/runtime.v1.RuntimeService/ListContainers
	Sep 03 23:50:31 old-k8s-version-335468 crio[804]: time="2025-09-03 23:50:31.529356962Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=924cf225-cf9e-4003-a6a6-e1b75955c6fb name=/runtime.v1.RuntimeService/ListContainers
	Sep 03 23:50:31 old-k8s-version-335468 crio[804]: time="2025-09-03 23:50:31.529386826Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=924cf225-cf9e-4003-a6a6-e1b75955c6fb name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Sep 3 23:42] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000007] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.000048] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.002453] (rpcbind)[118]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +1.031954] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000019] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.079592] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.108082] kauditd_printk_skb: 46 callbacks suppressed
	[  +0.035422] kauditd_printk_skb: 46 callbacks suppressed
	[Sep 3 23:48] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 23:50:31 up 8 min,  0 users,  load average: 0.01, 0.10, 0.07
	Linux old-k8s-version-335468 6.6.95 #1 SMP PREEMPT_DYNAMIC Sat Jul 26 03:21:57 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kubelet <==
	Sep 03 23:50:29 old-k8s-version-335468 kubelet[6939]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleError(0x4f04d00, 0xc0001afee0)
	Sep 03 23:50:29 old-k8s-version-335468 kubelet[6939]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:108 +0x66
	Sep 03 23:50:29 old-k8s-version-335468 kubelet[6939]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.DefaultWatchErrorHandler(0xc000254380, 0x4f04d00, 0xc0001afe90)
	Sep 03 23:50:29 old-k8s-version-335468 kubelet[6939]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:138 +0x185
	Sep 03 23:50:29 old-k8s-version-335468 kubelet[6939]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run.func1()
	Sep 03 23:50:29 old-k8s-version-335468 kubelet[6939]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:222 +0x70
	Sep 03 23:50:29 old-k8s-version-335468 kubelet[6939]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc000cb7ef0)
	Sep 03 23:50:29 old-k8s-version-335468 kubelet[6939]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:155 +0x5f
	Sep 03 23:50:29 old-k8s-version-335468 kubelet[6939]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0006efef0, 0x4f0ac20, 0xc000bfdc70, 0x1, 0xc0001020c0)
	Sep 03 23:50:29 old-k8s-version-335468 kubelet[6939]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:156 +0xad
	Sep 03 23:50:29 old-k8s-version-335468 kubelet[6939]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run(0xc000254380, 0xc0001020c0)
	Sep 03 23:50:29 old-k8s-version-335468 kubelet[6939]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:220 +0x1c5
	Sep 03 23:50:29 old-k8s-version-335468 kubelet[6939]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).StartWithChannel.func1()
	Sep 03 23:50:29 old-k8s-version-335468 kubelet[6939]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:56 +0x2e
	Sep 03 23:50:29 old-k8s-version-335468 kubelet[6939]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1(0xc000bf3940, 0xc0002f33c0)
	Sep 03 23:50:29 old-k8s-version-335468 kubelet[6939]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:73 +0x51
	Sep 03 23:50:29 old-k8s-version-335468 kubelet[6939]: created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start
	Sep 03 23:50:29 old-k8s-version-335468 kubelet[6939]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x65
	Sep 03 23:50:30 old-k8s-version-335468 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 20.
	Sep 03 23:50:30 old-k8s-version-335468 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Sep 03 23:50:30 old-k8s-version-335468 kubelet[7017]: I0903 23:50:30.401352    7017 server.go:416] Version: v1.20.0
	Sep 03 23:50:30 old-k8s-version-335468 kubelet[7017]: I0903 23:50:30.401744    7017 server.go:837] Client rotation is on, will bootstrap in background
	Sep 03 23:50:30 old-k8s-version-335468 kubelet[7017]: I0903 23:50:30.404350    7017 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Sep 03 23:50:30 old-k8s-version-335468 kubelet[7017]: W0903 23:50:30.405380    7017 manager.go:159] Cannot detect current cgroup on cgroup v2
	Sep 03 23:50:30 old-k8s-version-335468 kubelet[7017]: I0903 23:50:30.405584    7017 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-335468 -n old-k8s-version-335468
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-335468 -n old-k8s-version-335468: exit status 2 (239.484931ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "old-k8s-version-335468" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (513.55s)
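
The kubelet unit log above ends with systemd scheduling its 20th restart of kubelet ("restart counter is at 20"), i.e. the node agent is crash-looping, which matches the "Stopped" apiserver status reported by the post-mortem check. One way to inspect the loop by hand is to shell into the profile VM; the commands below are a sketch based on the profile name in this log, not part of the original test run:

  out/minikube-linux-amd64 -p old-k8s-version-335468 ssh "sudo systemctl status kubelet --no-pager"
  out/minikube-linux-amd64 -p old-k8s-version-335468 ssh "sudo journalctl -u kubelet --no-pager | tail -n 50"

Since status --format renders a Go template over minikube's status fields, the host, kubelet and apiserver states can also be printed in one call, e.g. --format='{{.Host}} {{.Kubelet}} {{.APIServer}}'.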

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (541.9s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:337: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.80:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.80:8443: connect: connection refused
[identical warning repeated 11 more times]
E0903 23:50:44.140062  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/flannel-380966/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.80:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.80:8443: connect: connection refused
[identical warning repeated 13 more times]
E0903 23:50:58.234163  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/enable-default-cni-380966/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.80:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.80:8443: connect: connection refused
[identical warning repeated 4 more times]
E0903 23:51:03.160680  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/addons-389176/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.80:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.80:8443: connect: connection refused
[identical warning repeated 58 more times]
E0903 23:52:01.592261  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/no-preload-434043/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.80:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.80:8443: connect: connection refused
[identical warning repeated 44 more times]
E0903 23:52:46.320380  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/auto-380966/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.80:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.80:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.80:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.80:8443: connect: connection refused
E0903 23:52:48.517882  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/default-k8s-diff-port-799704/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.80:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.80:8443: connect: connection refused
[identical warning repeated 21 more times]
helpers_test.go:337: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.80:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.80:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.80:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.80:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.80:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.80:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.80:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.80:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.80:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.80:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.80:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.80:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.80:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.80:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.80:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.80:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.80:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.80:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.80:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.80:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.80:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.80:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.80:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.80:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.80:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.80:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.80:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.80:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.80:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.80:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.80:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.80:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.80:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.80:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.80:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.80:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.80:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.80:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.80:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.80:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.80:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.80:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.80:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.80:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.80:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.80:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.80:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.80:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.80:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.80:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.80:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.80:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.80:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.80:8443: connect: connection refused
E0903 23:53:37.764352  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/kindnet-380966/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.80:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.80:8443: connect: connection refused
[previous warning repeated 20 more times]
E0903 23:53:59.138844  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/functional-381687/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.80:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.80:8443: connect: connection refused
[previous warning repeated 10 more times]
E0903 23:54:09.385234  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/auto-380966/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.80:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.80:8443: connect: connection refused
[previous warning repeated 2 more times]
E0903 23:54:12.838838  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/calico-380966/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.80:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.80:8443: connect: connection refused
[previous warning repeated 36 more times]
E0903 23:54:49.580658  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/custom-flannel-380966/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.80:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.80:8443: connect: connection refused
[previous warning repeated 10 more times]
E0903 23:55:00.831203  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/kindnet-380966/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.80:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.80:8443: connect: connection refused
[previous warning repeated 5 more times]
E0903 23:55:06.323838  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/bridge-380966/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.80:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.80:8443: connect: connection refused
[previous warning repeated 28 more times]
E0903 23:55:35.903019  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/calico-380966/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
[the same helpers_test.go:337 warning repeated 8 times]
E0903 23:55:44.140270  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/flannel-380966/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
[the same helpers_test.go:337 warning repeated 14 times]
E0903 23:55:58.233867  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/enable-default-cni-380966/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
[the same helpers_test.go:337 warning repeated 5 times]
E0903 23:56:03.161050  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/addons-389176/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
[the same helpers_test.go:337 warning repeated 10 times]
E0903 23:56:12.647974  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/custom-flannel-380966/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
[the same helpers_test.go:337 warning repeated 17 times]
E0903 23:56:29.388063  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/bridge-380966/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
[the same helpers_test.go:337 warning repeated 32 times]
E0903 23:57:01.592026  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/no-preload-434043/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
[the same helpers_test.go:337 warning repeated 5 times]
E0903 23:57:07.202973  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/flannel-380966/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
[the same helpers_test.go:337 warning repeated 15 times]
E0903 23:57:21.299664  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/enable-default-cni-380966/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
[the same helpers_test.go:337 warning repeated 25 times]
E0903 23:57:46.320517  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/auto-380966/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
[the same helpers_test.go:337 warning repeated 2 times]
E0903 23:57:48.517960  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/default-k8s-diff-port-799704/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
[the same helpers_test.go:337 warning repeated 31 times]
helpers_test.go:337: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.80:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.80:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.80:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.80:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.80:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.80:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.80:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.80:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.80:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.80:8443: connect: connection refused
E0903 23:58:24.655231  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/no-preload-434043/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.80:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.80:8443: connect: connection refused
[last warning repeated 12 more times]
E0903 23:58:37.764430  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/kindnet-380966/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.80:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.80:8443: connect: connection refused
[last warning repeated 20 more times]
E0903 23:58:59.138940  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/functional-381687/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.80:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.80:8443: connect: connection refused
[last warning repeated 6 more times]
E0903 23:59:06.247697  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/addons-389176/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.80:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.80:8443: connect: connection refused
[last warning repeated 5 more times]
E0903 23:59:11.583336  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/default-k8s-diff-port-799704/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.80:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.80:8443: connect: connection refused
E0903 23:59:12.839070  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/calico-380966/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.80:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.80:8443: connect: connection refused
[last warning repeated 18 more times]
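The warnings above come from the test's wait loop, which lists pods by label selector once per interval and logs each failed attempt until its deadline expires; with the apiserver down, every attempt fails with "connection refused" until the 9m0s context deadline hits. A minimal client-go sketch of such a loop follows; it is illustrative only (the kubeconfig path, namespace handling, and 5s interval are assumptions, not the actual helpers_test.go code):

    // Hypothetical sketch of the pod wait loop behind the WARNING lines above.
    package main

    import (
        "context"
        "log"
        "time"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Kubeconfig path is illustrative.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
        if err != nil {
            log.Fatal(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)

        ctx, cancel := context.WithTimeout(context.Background(), 9*time.Minute)
        defer cancel()
        for {
            pods, err := client.CoreV1().Pods("kubernetes-dashboard").List(ctx,
                metav1.ListOptions{LabelSelector: "k8s-app=kubernetes-dashboard"})
            switch {
            case err != nil:
                // With the apiserver unreachable, every List fails like the lines above.
                log.Printf("WARNING: pod list returned: %v", err)
            case len(pods.Items) > 0:
                return // pod exists; a real test would also wait for readiness
            }
            select {
            case <-ctx.Done():
                log.Fatal("context deadline exceeded") // the failure reported below
            case <-time.After(5 * time.Second):
            }
        }
    }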
start_stop_delete_test.go:272: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:272: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-335468 -n old-k8s-version-335468
start_stop_delete_test.go:272: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-335468 -n old-k8s-version-335468: exit status 2 (248.155316ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:272: status error: exit status 2 (may be ok)
start_stop_delete_test.go:272: "old-k8s-version-335468" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:273: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-335468 -n old-k8s-version-335468
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-335468 -n old-k8s-version-335468: exit status 2 (229.442143ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
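A note on the status checks above: minikube's --format flag takes a Go text/template rendered against its status struct, which is how {{.APIServer}} printed "Stopped" while {{.Host}} printed "Running". A simplified sketch of the mechanism (the field set here is trimmed; the real struct has more fields):

    package main

    import (
        "os"
        "text/template"
    )

    // Trimmed stand-in for minikube's status struct.
    type Status struct {
        Host, Kubelet, APIServer string
    }

    func main() {
        st := Status{Host: "Running", Kubelet: "Stopped", APIServer: "Stopped"}
        // Same template syntax as `--format={{.APIServer}}` above.
        tmpl := template.Must(template.New("status").Parse("{{.APIServer}}\n"))
        if err := tmpl.Execute(os.Stdout, st); err != nil {
            panic(err)
        }
        // Output: Stopped
    }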
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-335468 logs -n 25
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                    ARGS                                                                                                                     │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p no-preload-434043                                                                                                                                                                                                                        │ no-preload-434043            │ jenkins │ v1.36.0 │ 03 Sep 25 23:40 UTC │ 03 Sep 25 23:40 UTC │
	│ delete  │ -p no-preload-434043                                                                                                                                                                                                                        │ no-preload-434043            │ jenkins │ v1.36.0 │ 03 Sep 25 23:40 UTC │ 03 Sep 25 23:40 UTC │
	│ start   │ -p newest-cni-959437 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.0 │ newest-cni-959437            │ jenkins │ v1.36.0 │ 03 Sep 25 23:40 UTC │ 03 Sep 25 23:40 UTC │
	│ image   │ embed-certs-088493 image list --format=json                                                                                                                                                                                                 │ embed-certs-088493           │ jenkins │ v1.36.0 │ 03 Sep 25 23:40 UTC │ 03 Sep 25 23:40 UTC │
	│ pause   │ -p embed-certs-088493 --alsologtostderr -v=1                                                                                                                                                                                                │ embed-certs-088493           │ jenkins │ v1.36.0 │ 03 Sep 25 23:40 UTC │ 03 Sep 25 23:40 UTC │
	│ unpause │ -p embed-certs-088493 --alsologtostderr -v=1                                                                                                                                                                                                │ embed-certs-088493           │ jenkins │ v1.36.0 │ 03 Sep 25 23:40 UTC │ 03 Sep 25 23:40 UTC │
	│ delete  │ -p embed-certs-088493                                                                                                                                                                                                                       │ embed-certs-088493           │ jenkins │ v1.36.0 │ 03 Sep 25 23:40 UTC │ 03 Sep 25 23:40 UTC │
	│ delete  │ -p embed-certs-088493                                                                                                                                                                                                                       │ embed-certs-088493           │ jenkins │ v1.36.0 │ 03 Sep 25 23:40 UTC │ 03 Sep 25 23:40 UTC │
	│ image   │ default-k8s-diff-port-799704 image list --format=json                                                                                                                                                                                       │ default-k8s-diff-port-799704 │ jenkins │ v1.36.0 │ 03 Sep 25 23:40 UTC │ 03 Sep 25 23:40 UTC │
	│ pause   │ -p default-k8s-diff-port-799704 --alsologtostderr -v=1                                                                                                                                                                                      │ default-k8s-diff-port-799704 │ jenkins │ v1.36.0 │ 03 Sep 25 23:40 UTC │ 03 Sep 25 23:40 UTC │
	│ unpause │ -p default-k8s-diff-port-799704 --alsologtostderr -v=1                                                                                                                                                                                      │ default-k8s-diff-port-799704 │ jenkins │ v1.36.0 │ 03 Sep 25 23:40 UTC │ 03 Sep 25 23:40 UTC │
	│ delete  │ -p default-k8s-diff-port-799704                                                                                                                                                                                                             │ default-k8s-diff-port-799704 │ jenkins │ v1.36.0 │ 03 Sep 25 23:40 UTC │ 03 Sep 25 23:40 UTC │
	│ delete  │ -p default-k8s-diff-port-799704                                                                                                                                                                                                             │ default-k8s-diff-port-799704 │ jenkins │ v1.36.0 │ 03 Sep 25 23:40 UTC │ 03 Sep 25 23:40 UTC │
	│ addons  │ enable metrics-server -p newest-cni-959437 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                     │ newest-cni-959437            │ jenkins │ v1.36.0 │ 03 Sep 25 23:40 UTC │ 03 Sep 25 23:40 UTC │
	│ stop    │ -p newest-cni-959437 --alsologtostderr -v=3                                                                                                                                                                                                 │ newest-cni-959437            │ jenkins │ v1.36.0 │ 03 Sep 25 23:40 UTC │ 03 Sep 25 23:41 UTC │
	│ addons  │ enable dashboard -p newest-cni-959437 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                │ newest-cni-959437            │ jenkins │ v1.36.0 │ 03 Sep 25 23:41 UTC │ 03 Sep 25 23:41 UTC │
	│ start   │ -p newest-cni-959437 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.0 │ newest-cni-959437            │ jenkins │ v1.36.0 │ 03 Sep 25 23:41 UTC │ 03 Sep 25 23:41 UTC │
	│ image   │ newest-cni-959437 image list --format=json                                                                                                                                                                                                  │ newest-cni-959437            │ jenkins │ v1.36.0 │ 03 Sep 25 23:41 UTC │ 03 Sep 25 23:41 UTC │
	│ pause   │ -p newest-cni-959437 --alsologtostderr -v=1                                                                                                                                                                                                 │ newest-cni-959437            │ jenkins │ v1.36.0 │ 03 Sep 25 23:41 UTC │ 03 Sep 25 23:41 UTC │
	│ unpause │ -p newest-cni-959437 --alsologtostderr -v=1                                                                                                                                                                                                 │ newest-cni-959437            │ jenkins │ v1.36.0 │ 03 Sep 25 23:41 UTC │ 03 Sep 25 23:41 UTC │
	│ delete  │ -p newest-cni-959437                                                                                                                                                                                                                        │ newest-cni-959437            │ jenkins │ v1.36.0 │ 03 Sep 25 23:41 UTC │ 03 Sep 25 23:41 UTC │
	│ delete  │ -p newest-cni-959437                                                                                                                                                                                                                        │ newest-cni-959437            │ jenkins │ v1.36.0 │ 03 Sep 25 23:41 UTC │ 03 Sep 25 23:41 UTC │
	│ stop    │ -p old-k8s-version-335468 --alsologtostderr -v=3                                                                                                                                                                                            │ old-k8s-version-335468       │ jenkins │ v1.36.0 │ 03 Sep 25 23:41 UTC │ 03 Sep 25 23:41 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-335468 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                           │ old-k8s-version-335468       │ jenkins │ v1.36.0 │ 03 Sep 25 23:41 UTC │ 03 Sep 25 23:41 UTC │
	│ start   │ -p old-k8s-version-335468 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0 │ old-k8s-version-335468       │ jenkins │ v1.36.0 │ 03 Sep 25 23:41 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/03 23:41:58
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
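The header format documented above is the standard glog/klog layout; each entry splits back into level, date, time, thread id, source location, and message. A small parser sketch for lines of this shape:

    package main

    import (
        "fmt"
        "regexp"
    )

    // [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
    var klogLine = regexp.MustCompile(
        `^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([^:]+):(\d+)\] (.*)$`)

    func main() {
        m := klogLine.FindStringSubmatch(
            "I0903 23:41:58.777140  171911 out.go:360] Setting OutFile to fd 1 ...")
        if m == nil {
            fmt.Println("no match")
            return
        }
        fmt.Printf("level=%s date=%s time=%s thread=%s at=%s:%s msg=%q\n",
            m[1], m[2], m[3], m[4], m[5], m[6], m[7])
    }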
	I0903 23:41:58.777140  171911 out.go:360] Setting OutFile to fd 1 ...
	I0903 23:41:58.777406  171911 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0903 23:41:58.777416  171911 out.go:374] Setting ErrFile to fd 2...
	I0903 23:41:58.777422  171911 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0903 23:41:58.777607  171911 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21341-109162/.minikube/bin
	I0903 23:41:58.778141  171911 out.go:368] Setting JSON to false
	I0903 23:41:58.779000  171911 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":8663,"bootTime":1756934256,"procs":189,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0903 23:41:58.779090  171911 start.go:140] virtualization: kvm guest
	I0903 23:41:58.781253  171911 out.go:179] * [old-k8s-version-335468] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	I0903 23:41:58.782571  171911 notify.go:220] Checking for updates...
	I0903 23:41:58.782584  171911 out.go:179]   - MINIKUBE_LOCATION=21341
	I0903 23:41:58.783694  171911 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0903 23:41:58.784604  171911 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21341-109162/kubeconfig
	I0903 23:41:58.785686  171911 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21341-109162/.minikube
	I0903 23:41:58.786886  171911 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0903 23:41:58.787874  171911 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0903 23:41:58.789111  171911 config.go:182] Loaded profile config "old-k8s-version-335468": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
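The profile config loaded above lives on disk as JSON (the save path appears further down in this log). A sketch of decoding a small subset of it; the field subset here is illustrative, inferred from the struct dump below, and the real ClusterConfig carries many more fields:

    package main

    import (
        "encoding/json"
        "fmt"
        "os"
    )

    // Illustrative subset of the profile's ClusterConfig.
    type ClusterConfig struct {
        Name             string
        Driver           string
        KubernetesConfig struct {
            KubernetesVersion string
            ContainerRuntime  string
        }
    }

    func main() {
        p := os.ExpandEnv("$HOME/.minikube/profiles/old-k8s-version-335468/config.json")
        b, err := os.ReadFile(p)
        if err != nil {
            fmt.Println(err)
            return
        }
        var cc ClusterConfig
        if err := json.Unmarshal(b, &cc); err != nil {
            fmt.Println(err)
            return
        }
        fmt.Printf("Loaded profile config %q: Driver=%s, ContainerRuntime=%s, KubernetesVersion=%s\n",
            cc.Name, cc.Driver, cc.KubernetesConfig.ContainerRuntime, cc.KubernetesConfig.KubernetesVersion)
    }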
	I0903 23:41:58.789531  171911 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21341-109162/.minikube/bin/docker-machine-driver-kvm2
	I0903 23:41:58.789581  171911 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0903 23:41:58.804713  171911 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41019
	I0903 23:41:58.805180  171911 main.go:141] libmachine: () Calling .GetVersion
	I0903 23:41:58.805760  171911 main.go:141] libmachine: Using API Version  1
	I0903 23:41:58.805799  171911 main.go:141] libmachine: () Calling .SetConfigRaw
	I0903 23:41:58.806176  171911 main.go:141] libmachine: () Calling .GetMachineName
	I0903 23:41:58.806424  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .DriverName
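The "Plugin server listening at address 127.0.0.1:PORT" lines above come from the machine driver running as a separate process that serves RPC on a loopback port, which the main binary then calls (.GetVersion, .GetMachineName, and so on). A generic net/rpc sketch of that shape; this is not libmachine's actual wire protocol:

    package main

    import (
        "fmt"
        "net"
        "net/rpc"
    )

    type Driver struct{}

    // Stand-in for calls like .GetVersion in the log above.
    func (d *Driver) GetVersion(_ int, v *int) error { *v = 1; return nil }

    func main() {
        if err := rpc.Register(new(Driver)); err != nil {
            panic(err)
        }
        ln, err := net.Listen("tcp", "127.0.0.1:0") // kernel picks a free port (e.g. 41019)
        if err != nil {
            panic(err)
        }
        fmt.Println("Plugin server listening at address", ln.Addr())
        for {
            conn, err := ln.Accept()
            if err != nil {
                return
            }
            go rpc.ServeConn(conn)
        }
    }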
	I0903 23:41:58.808193  171911 out.go:179] * Kubernetes 1.34.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.34.0
	I0903 23:41:58.809451  171911 driver.go:421] Setting default libvirt URI to qemu:///system
	I0903 23:41:58.809758  171911 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21341-109162/.minikube/bin/docker-machine-driver-kvm2
	I0903 23:41:58.809795  171911 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0903 23:41:58.825067  171911 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37531
	I0903 23:41:58.825609  171911 main.go:141] libmachine: () Calling .GetVersion
	I0903 23:41:58.826091  171911 main.go:141] libmachine: Using API Version  1
	I0903 23:41:58.826116  171911 main.go:141] libmachine: () Calling .SetConfigRaw
	I0903 23:41:58.826506  171911 main.go:141] libmachine: () Calling .GetMachineName
	I0903 23:41:58.826651  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .DriverName
	I0903 23:41:58.862143  171911 out.go:179] * Using the kvm2 driver based on existing profile
	I0903 23:41:58.863156  171911 start.go:304] selected driver: kvm2
	I0903 23:41:58.863168  171911 start.go:918] validating driver "kvm2" against &{Name:old-k8s-version-335468 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21147/minikube-v1.36.0-1753487480-21147-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-335468 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.80 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0903 23:41:58.863278  171911 start.go:929] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0903 23:41:58.863960  171911 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0903 23:41:58.864040  171911 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21341-109162/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0903 23:41:58.879770  171911 install.go:137] /home/jenkins/minikube-integration/21341-109162/.minikube/bin/docker-machine-driver-kvm2 version is 1.36.0
	I0903 23:41:58.880346  171911 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0903 23:41:58.880393  171911 cni.go:84] Creating CNI manager for ""
	I0903 23:41:58.880445  171911 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0903 23:41:58.880503  171911 start.go:348] cluster config:
	{Name:old-k8s-version-335468 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21147/minikube-v1.36.0-1753487480-21147-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-335468 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.80 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0903 23:41:58.880659  171911 iso.go:125] acquiring lock: {Name:mk1032fbced3c9e76ba5a04480289e9f07d0eb9c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0903 23:41:58.882387  171911 out.go:179] * Starting "old-k8s-version-335468" primary control-plane node in "old-k8s-version-335468" cluster
	I0903 23:41:58.883545  171911 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0903 23:41:58.883582  171911 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21341-109162/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0903 23:41:58.883591  171911 cache.go:58] Caching tarball of preloaded images
	I0903 23:41:58.883679  171911 preload.go:172] Found /home/jenkins/minikube-integration/21341-109162/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0903 23:41:58.883689  171911 cache.go:61] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0903 23:41:58.883774  171911 profile.go:143] Saving config to /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/old-k8s-version-335468/config.json ...
	I0903 23:41:58.883966  171911 start.go:360] acquireMachinesLock for old-k8s-version-335468: {Name:mkcbe368d68a51a2a3c0eadc653c4df7d9736b4b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0903 23:41:58.884013  171911 start.go:364] duration metric: took 27.848µs to acquireMachinesLock for "old-k8s-version-335468"
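The machines lock above carries Delay:500ms and Timeout:13m0s: keep retrying at the delay interval until the lock frees or the timeout expires. Here the lock was uncontended, so acquisition took microseconds. A generic sketch of those semantics (not minikube's actual lock implementation):

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // acquire retries try() every delay until it succeeds or timeout elapses.
    func acquire(try func() bool, delay, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for !try() {
            if time.Now().After(deadline) {
                return errors.New("timed out acquiring lock")
            }
            time.Sleep(delay) // Delay:500ms in the spec above
        }
        return nil
    }

    func main() {
        start := time.Now()
        // Uncontended lock: the first attempt succeeds, as in the log above.
        err := acquire(func() bool { return true }, 500*time.Millisecond, 13*time.Minute)
        fmt.Printf("acquired in %v (err=%v)\n", time.Since(start), err)
    }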
	I0903 23:41:58.884027  171911 start.go:96] Skipping create...Using existing machine configuration
	I0903 23:41:58.884034  171911 fix.go:54] fixHost starting: 
	I0903 23:41:58.884290  171911 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21341-109162/.minikube/bin/docker-machine-driver-kvm2
	I0903 23:41:58.884339  171911 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0903 23:41:58.899629  171911 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38175
	I0903 23:41:58.900295  171911 main.go:141] libmachine: () Calling .GetVersion
	I0903 23:41:58.901063  171911 main.go:141] libmachine: Using API Version  1
	I0903 23:41:58.901090  171911 main.go:141] libmachine: () Calling .SetConfigRaw
	I0903 23:41:58.901496  171911 main.go:141] libmachine: () Calling .GetMachineName
	I0903 23:41:58.901698  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .DriverName
	I0903 23:41:58.901857  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetState
	I0903 23:41:58.903463  171911 fix.go:112] recreateIfNeeded on old-k8s-version-335468: state=Stopped err=<nil>
	I0903 23:41:58.903488  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .DriverName
	W0903 23:41:58.903630  171911 fix.go:138] unexpected machine state, will restart: <nil>
	I0903 23:41:58.905426  171911 out.go:252] * Restarting existing kvm2 VM for "old-k8s-version-335468" ...
	I0903 23:41:58.905455  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .Start
	I0903 23:41:58.905612  171911 main.go:141] libmachine: (old-k8s-version-335468) starting domain...
	I0903 23:41:58.905634  171911 main.go:141] libmachine: (old-k8s-version-335468) ensuring networks are active...
	I0903 23:41:58.906424  171911 main.go:141] libmachine: (old-k8s-version-335468) Ensuring network default is active
	I0903 23:41:58.906730  171911 main.go:141] libmachine: (old-k8s-version-335468) Ensuring network mk-old-k8s-version-335468 is active
	I0903 23:41:58.907059  171911 main.go:141] libmachine: (old-k8s-version-335468) getting domain XML...
	I0903 23:41:58.907800  171911 main.go:141] libmachine: (old-k8s-version-335468) creating domain...
	I0903 23:42:00.140356  171911 main.go:141] libmachine: (old-k8s-version-335468) waiting for IP...
	I0903 23:42:00.141202  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has defined MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
	I0903 23:42:00.141582  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | unable to find current IP address of domain old-k8s-version-335468 in network mk-old-k8s-version-335468
	I0903 23:42:00.141709  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | I0903 23:42:00.141590  171947 retry.go:31] will retry after 276.832755ms: waiting for domain to come up
	I0903 23:42:00.420407  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has defined MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
	I0903 23:42:00.420855  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | unable to find current IP address of domain old-k8s-version-335468 in network mk-old-k8s-version-335468
	I0903 23:42:00.420917  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | I0903 23:42:00.420836  171947 retry.go:31] will retry after 314.668622ms: waiting for domain to come up
	I0903 23:42:00.737468  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has defined MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
	I0903 23:42:00.737871  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | unable to find current IP address of domain old-k8s-version-335468 in network mk-old-k8s-version-335468
	I0903 23:42:00.737901  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | I0903 23:42:00.737828  171947 retry.go:31] will retry after 345.8826ms: waiting for domain to come up
	I0903 23:42:01.085701  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has defined MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
	I0903 23:42:01.086185  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | unable to find current IP address of domain old-k8s-version-335468 in network mk-old-k8s-version-335468
	I0903 23:42:01.086217  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | I0903 23:42:01.086168  171947 retry.go:31] will retry after 426.296812ms: waiting for domain to come up
	I0903 23:42:01.513991  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has defined MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
	I0903 23:42:01.514453  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | unable to find current IP address of domain old-k8s-version-335468 in network mk-old-k8s-version-335468
	I0903 23:42:01.514482  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | I0903 23:42:01.514426  171947 retry.go:31] will retry after 602.972692ms: waiting for domain to come up
	I0903 23:42:02.119438  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has defined MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
	I0903 23:42:02.119856  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | unable to find current IP address of domain old-k8s-version-335468 in network mk-old-k8s-version-335468
	I0903 23:42:02.119885  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | I0903 23:42:02.119827  171947 retry.go:31] will retry after 798.351499ms: waiting for domain to come up
	I0903 23:42:02.919839  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has defined MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
	I0903 23:42:02.920276  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | unable to find current IP address of domain old-k8s-version-335468 in network mk-old-k8s-version-335468
	I0903 23:42:02.920307  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | I0903 23:42:02.920220  171947 retry.go:31] will retry after 1.022190105s: waiting for domain to come up
	I0903 23:42:03.944354  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has defined MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
	I0903 23:42:03.944807  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | unable to find current IP address of domain old-k8s-version-335468 in network mk-old-k8s-version-335468
	I0903 23:42:03.944840  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | I0903 23:42:03.944747  171947 retry.go:31] will retry after 1.29364095s: waiting for domain to come up
	I0903 23:42:05.240165  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has defined MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
	I0903 23:42:05.240547  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | unable to find current IP address of domain old-k8s-version-335468 in network mk-old-k8s-version-335468
	I0903 23:42:05.240578  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | I0903 23:42:05.240525  171947 retry.go:31] will retry after 1.368503788s: waiting for domain to come up
	I0903 23:42:06.611109  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has defined MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
	I0903 23:42:06.611618  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | unable to find current IP address of domain old-k8s-version-335468 in network mk-old-k8s-version-335468
	I0903 23:42:06.611652  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | I0903 23:42:06.611578  171947 retry.go:31] will retry after 2.084047059s: waiting for domain to come up
	I0903 23:42:08.698604  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has defined MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
	I0903 23:42:08.699065  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | unable to find current IP address of domain old-k8s-version-335468 in network mk-old-k8s-version-335468
	I0903 23:42:08.699089  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | I0903 23:42:08.699048  171947 retry.go:31] will retry after 2.491740737s: waiting for domain to come up
	I0903 23:42:11.193535  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has defined MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
	I0903 23:42:11.194024  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | unable to find current IP address of domain old-k8s-version-335468 in network mk-old-k8s-version-335468
	I0903 23:42:11.194066  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | I0903 23:42:11.194000  171947 retry.go:31] will retry after 2.442590545s: waiting for domain to come up
	I0903 23:42:13.638462  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has defined MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
	I0903 23:42:13.638791  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | unable to find current IP address of domain old-k8s-version-335468 in network mk-old-k8s-version-335468
	I0903 23:42:13.638812  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | I0903 23:42:13.638754  171947 retry.go:31] will retry after 4.493184117s: waiting for domain to come up
	I0903 23:42:18.134025  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has defined MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
	I0903 23:42:18.134463  171911 main.go:141] libmachine: (old-k8s-version-335468) found domain IP: 192.168.61.80
	I0903 23:42:18.134496  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has current primary IP address 192.168.61.80 and MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
	I0903 23:42:18.134511  171911 main.go:141] libmachine: (old-k8s-version-335468) reserving static IP address...
	I0903 23:42:18.134886  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | found host DHCP lease matching {name: "old-k8s-version-335468", mac: "52:54:00:a2:6b:b9", ip: "192.168.61.80"} in network mk-old-k8s-version-335468: {Iface:virbr3 ExpiryTime:2025-09-04 00:42:10 +0000 UTC Type:0 Mac:52:54:00:a2:6b:b9 Iaid: IPaddr:192.168.61.80 Prefix:24 Hostname:old-k8s-version-335468 Clientid:01:52:54:00:a2:6b:b9}
	I0903 23:42:18.134919  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | skip adding static IP to network mk-old-k8s-version-335468 - found existing host DHCP lease matching {name: "old-k8s-version-335468", mac: "52:54:00:a2:6b:b9", ip: "192.168.61.80"}
	I0903 23:42:18.134935  171911 main.go:141] libmachine: (old-k8s-version-335468) reserved static IP address 192.168.61.80 for domain old-k8s-version-335468
	I0903 23:42:18.134949  171911 main.go:141] libmachine: (old-k8s-version-335468) waiting for SSH...
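
The wait-for-IP loop above retries with a growing, jittered delay (276ms, 314ms, ... 4.49s) until libvirt reports a DHCP lease for the domain. A minimal Go sketch of that backoff pattern, assuming a hypothetical check function rather than minikube's actual retry.go:

// Minimal sketch, not minikube's actual retry.go: poll a condition with a
// growing, jittered delay until it succeeds or the deadline passes.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

func waitFor(check func() error, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	base := 250 * time.Millisecond
	for attempt := 1; ; attempt++ {
		err := check()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out after %d attempts: %w", attempt, err)
		}
		// Grow the base delay and add up to 50% jitter, echoing the
		// 276ms -> 314ms -> ... -> 4.49s progression logged above.
		sleep := base + time.Duration(rand.Int63n(int64(base)/2))
		fmt.Printf("will retry after %v: %v\n", sleep, err)
		time.Sleep(sleep)
		base = base * 3 / 2
	}
}

func main() {
	tries := 0
	err := waitFor(func() error {
		tries++
		if tries < 4 {
			return errors.New("waiting for domain to come up")
		}
		return nil
	}, 30*time.Second)
	fmt.Println("done:", err)
}
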
	I0903 23:42:18.134965  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | Getting to WaitForSSH function...
	I0903 23:42:18.137067  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has defined MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
	I0903 23:42:18.137412  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:6b:b9", ip: ""} in network mk-old-k8s-version-335468: {Iface:virbr3 ExpiryTime:2025-09-04 00:42:10 +0000 UTC Type:0 Mac:52:54:00:a2:6b:b9 Iaid: IPaddr:192.168.61.80 Prefix:24 Hostname:old-k8s-version-335468 Clientid:01:52:54:00:a2:6b:b9}
	I0903 23:42:18.137435  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has defined IP address 192.168.61.80 and MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
	I0903 23:42:18.137591  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | Using SSH client type: external
	I0903 23:42:18.137615  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | Using SSH private key: /home/jenkins/minikube-integration/21341-109162/.minikube/machines/old-k8s-version-335468/id_rsa (-rw-------)
	I0903 23:42:18.137661  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.80 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/21341-109162/.minikube/machines/old-k8s-version-335468/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0903 23:42:18.137678  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | About to run SSH command:
	I0903 23:42:18.137689  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | exit 0
	I0903 23:42:18.265417  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | SSH cmd err, output: <nil>: 
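
WaitForSSH above probes liveness by running `exit 0` through the external ssh client with non-interactive options. A minimal sketch of the same probe; the address is the one from the log, while the key path and polling interval are illustrative:

// Minimal sketch of an external-SSH liveness probe: run `exit 0` over ssh
// with non-interactive options, retrying until the command succeeds.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"time"
)

func sshAlive(addr, keyPath string) error {
	args := []string{
		"-F", "/dev/null",
		"-o", "ConnectionAttempts=3", "-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no", "-o", "UserKnownHostsFile=/dev/null",
		"-o", "IdentitiesOnly=yes", "-i", keyPath,
		"-p", "22", "docker@" + addr, "exit 0",
	}
	return exec.Command("/usr/bin/ssh", args...).Run()
}

func main() {
	// Address from the log above; key path is illustrative.
	for {
		if err := sshAlive("192.168.61.80", os.ExpandEnv("$HOME/.ssh/id_rsa")); err == nil {
			fmt.Println("SSH is up")
			return
		}
		time.Sleep(2 * time.Second)
	}
}
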
	I0903 23:42:18.265809  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetConfigRaw
	I0903 23:42:18.266396  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetIP
	I0903 23:42:18.269013  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has defined MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
	I0903 23:42:18.269322  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:6b:b9", ip: ""} in network mk-old-k8s-version-335468: {Iface:virbr3 ExpiryTime:2025-09-04 00:42:10 +0000 UTC Type:0 Mac:52:54:00:a2:6b:b9 Iaid: IPaddr:192.168.61.80 Prefix:24 Hostname:old-k8s-version-335468 Clientid:01:52:54:00:a2:6b:b9}
	I0903 23:42:18.269352  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has defined IP address 192.168.61.80 and MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
	I0903 23:42:18.269559  171911 profile.go:143] Saving config to /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/old-k8s-version-335468/config.json ...
	I0903 23:42:18.269795  171911 machine.go:93] provisionDockerMachine start ...
	I0903 23:42:18.269824  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .DriverName
	I0903 23:42:18.270044  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetSSHHostname
	I0903 23:42:18.272246  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has defined MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
	I0903 23:42:18.272543  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:6b:b9", ip: ""} in network mk-old-k8s-version-335468: {Iface:virbr3 ExpiryTime:2025-09-04 00:42:10 +0000 UTC Type:0 Mac:52:54:00:a2:6b:b9 Iaid: IPaddr:192.168.61.80 Prefix:24 Hostname:old-k8s-version-335468 Clientid:01:52:54:00:a2:6b:b9}
	I0903 23:42:18.272584  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has defined IP address 192.168.61.80 and MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
	I0903 23:42:18.272665  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetSSHPort
	I0903 23:42:18.272846  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetSSHKeyPath
	I0903 23:42:18.272997  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetSSHKeyPath
	I0903 23:42:18.273116  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetSSHUsername
	I0903 23:42:18.273294  171911 main.go:141] libmachine: Using SSH client type: native
	I0903 23:42:18.273564  171911 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83a420] 0x83d120 <nil>  [] 0s} 192.168.61.80 22 <nil> <nil>}
	I0903 23:42:18.273578  171911 main.go:141] libmachine: About to run SSH command:
	hostname
	I0903 23:42:18.389858  171911 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0903 23:42:18.389891  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetMachineName
	I0903 23:42:18.390184  171911 buildroot.go:166] provisioning hostname "old-k8s-version-335468"
	I0903 23:42:18.390213  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetMachineName
	I0903 23:42:18.390400  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetSSHHostname
	I0903 23:42:18.393065  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has defined MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
	I0903 23:42:18.393474  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:6b:b9", ip: ""} in network mk-old-k8s-version-335468: {Iface:virbr3 ExpiryTime:2025-09-04 00:42:10 +0000 UTC Type:0 Mac:52:54:00:a2:6b:b9 Iaid: IPaddr:192.168.61.80 Prefix:24 Hostname:old-k8s-version-335468 Clientid:01:52:54:00:a2:6b:b9}
	I0903 23:42:18.393508  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has defined IP address 192.168.61.80 and MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
	I0903 23:42:18.393629  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetSSHPort
	I0903 23:42:18.393787  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetSSHKeyPath
	I0903 23:42:18.393963  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetSSHKeyPath
	I0903 23:42:18.394113  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetSSHUsername
	I0903 23:42:18.394288  171911 main.go:141] libmachine: Using SSH client type: native
	I0903 23:42:18.394494  171911 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83a420] 0x83d120 <nil>  [] 0s} 192.168.61.80 22 <nil> <nil>}
	I0903 23:42:18.394507  171911 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-335468 && echo "old-k8s-version-335468" | sudo tee /etc/hostname
	I0903 23:42:18.526146  171911 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-335468
	
	I0903 23:42:18.526174  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetSSHHostname
	I0903 23:42:18.528979  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has defined MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
	I0903 23:42:18.529317  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:6b:b9", ip: ""} in network mk-old-k8s-version-335468: {Iface:virbr3 ExpiryTime:2025-09-04 00:42:10 +0000 UTC Type:0 Mac:52:54:00:a2:6b:b9 Iaid: IPaddr:192.168.61.80 Prefix:24 Hostname:old-k8s-version-335468 Clientid:01:52:54:00:a2:6b:b9}
	I0903 23:42:18.529341  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has defined IP address 192.168.61.80 and MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
	I0903 23:42:18.529521  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetSSHPort
	I0903 23:42:18.529715  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetSSHKeyPath
	I0903 23:42:18.529887  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetSSHKeyPath
	I0903 23:42:18.530039  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetSSHUsername
	I0903 23:42:18.530198  171911 main.go:141] libmachine: Using SSH client type: native
	I0903 23:42:18.530443  171911 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83a420] 0x83d120 <nil>  [] 0s} 192.168.61.80 22 <nil> <nil>}
	I0903 23:42:18.530462  171911 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-335468' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-335468/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-335468' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0903 23:42:18.655502  171911 main.go:141] libmachine: SSH cmd err, output: <nil>: 
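
The shell fragment above maps the node hostname to 127.0.1.1, rewriting an existing 127.0.1.1 entry or appending a new one. A Go sketch of the same logic, assuming direct file access instead of the sudo/sed pipeline (ensureHostsEntry is a hypothetical helper):

// Minimal sketch of the /etc/hosts patch performed by the shell fragment
// above: skip if the hostname is present, else rewrite or append 127.0.1.1.
package main

import (
	"fmt"
	"os"
	"strings"
)

func ensureHostsEntry(path, hostname string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	lines := strings.Split(string(data), "\n")
	// Mirror the `grep -xq '.*\sold-k8s-version-335468'` guard: nothing to
	// do if some line already ends with the hostname.
	for _, line := range lines {
		f := strings.Fields(line)
		if len(f) > 1 && f[len(f)-1] == hostname {
			return nil
		}
	}
	for i, line := range lines {
		if strings.HasPrefix(line, "127.0.1.1") {
			// Mirror the sed branch: rewrite the existing 127.0.1.1 entry.
			lines[i] = "127.0.1.1 " + hostname
			return os.WriteFile(path, []byte(strings.Join(lines, "\n")), 0644)
		}
	}
	// Mirror the `tee -a` branch: append a fresh entry.
	lines = append(lines, "127.0.1.1 "+hostname)
	return os.WriteFile(path, []byte(strings.Join(lines, "\n")), 0644)
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "old-k8s-version-335468"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
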
	I0903 23:42:18.655540  171911 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21341-109162/.minikube CaCertPath:/home/jenkins/minikube-integration/21341-109162/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21341-109162/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21341-109162/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21341-109162/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21341-109162/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21341-109162/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21341-109162/.minikube}
	I0903 23:42:18.655578  171911 buildroot.go:174] setting up certificates
	I0903 23:42:18.655591  171911 provision.go:84] configureAuth start
	I0903 23:42:18.655604  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetMachineName
	I0903 23:42:18.655930  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetIP
	I0903 23:42:18.658889  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has defined MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
	I0903 23:42:18.659364  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:6b:b9", ip: ""} in network mk-old-k8s-version-335468: {Iface:virbr3 ExpiryTime:2025-09-04 00:42:10 +0000 UTC Type:0 Mac:52:54:00:a2:6b:b9 Iaid: IPaddr:192.168.61.80 Prefix:24 Hostname:old-k8s-version-335468 Clientid:01:52:54:00:a2:6b:b9}
	I0903 23:42:18.659393  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has defined IP address 192.168.61.80 and MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
	I0903 23:42:18.659574  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetSSHHostname
	I0903 23:42:18.661700  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has defined MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
	I0903 23:42:18.661987  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:6b:b9", ip: ""} in network mk-old-k8s-version-335468: {Iface:virbr3 ExpiryTime:2025-09-04 00:42:10 +0000 UTC Type:0 Mac:52:54:00:a2:6b:b9 Iaid: IPaddr:192.168.61.80 Prefix:24 Hostname:old-k8s-version-335468 Clientid:01:52:54:00:a2:6b:b9}
	I0903 23:42:18.662012  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has defined IP address 192.168.61.80 and MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
	I0903 23:42:18.662134  171911 provision.go:143] copyHostCerts
	I0903 23:42:18.662197  171911 exec_runner.go:144] found /home/jenkins/minikube-integration/21341-109162/.minikube/ca.pem, removing ...
	I0903 23:42:18.662222  171911 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21341-109162/.minikube/ca.pem
	I0903 23:42:18.662298  171911 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21341-109162/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21341-109162/.minikube/ca.pem (1078 bytes)
	I0903 23:42:18.662418  171911 exec_runner.go:144] found /home/jenkins/minikube-integration/21341-109162/.minikube/cert.pem, removing ...
	I0903 23:42:18.662431  171911 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21341-109162/.minikube/cert.pem
	I0903 23:42:18.662468  171911 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21341-109162/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21341-109162/.minikube/cert.pem (1123 bytes)
	I0903 23:42:18.662563  171911 exec_runner.go:144] found /home/jenkins/minikube-integration/21341-109162/.minikube/key.pem, removing ...
	I0903 23:42:18.662573  171911 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21341-109162/.minikube/key.pem
	I0903 23:42:18.662606  171911 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21341-109162/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21341-109162/.minikube/key.pem (1675 bytes)
	I0903 23:42:18.662675  171911 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21341-109162/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21341-109162/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21341-109162/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-335468 san=[127.0.0.1 192.168.61.80 localhost minikube old-k8s-version-335468]
	I0903 23:42:18.981415  171911 provision.go:177] copyRemoteCerts
	I0903 23:42:18.981472  171911 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0903 23:42:18.981497  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetSSHHostname
	I0903 23:42:18.983969  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has defined MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
	I0903 23:42:18.984256  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:6b:b9", ip: ""} in network mk-old-k8s-version-335468: {Iface:virbr3 ExpiryTime:2025-09-04 00:42:10 +0000 UTC Type:0 Mac:52:54:00:a2:6b:b9 Iaid: IPaddr:192.168.61.80 Prefix:24 Hostname:old-k8s-version-335468 Clientid:01:52:54:00:a2:6b:b9}
	I0903 23:42:18.984285  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has defined IP address 192.168.61.80 and MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
	I0903 23:42:18.984430  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetSSHPort
	I0903 23:42:18.984657  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetSSHKeyPath
	I0903 23:42:18.984813  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetSSHUsername
	I0903 23:42:18.984946  171911 sshutil.go:53] new ssh client: &{IP:192.168.61.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21341-109162/.minikube/machines/old-k8s-version-335468/id_rsa Username:docker}
	I0903 23:42:19.073026  171911 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0903 23:42:19.100256  171911 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0903 23:42:19.127225  171911 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0903 23:42:19.154111  171911 provision.go:87] duration metric: took 498.506096ms to configureAuth
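
configureAuth above generates a server certificate whose SANs cover 127.0.0.1, 192.168.61.80, localhost, minikube and old-k8s-version-335468. A minimal crypto/x509 sketch of issuing such a certificate; it self-signs for brevity, whereas minikube signs with its CA key:

// Minimal sketch: issue a server cert with the SANs logged above.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-335468"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
		KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"localhost", "minikube", "old-k8s-version-335468"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.61.80")},
	}
	// Self-signed here (template doubles as parent); minikube passes its CA
	// certificate and CA private key instead.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
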
	I0903 23:42:19.154138  171911 buildroot.go:189] setting minikube options for container-runtime
	I0903 23:42:19.154358  171911 config.go:182] Loaded profile config "old-k8s-version-335468": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0903 23:42:19.154450  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetSSHHostname
	I0903 23:42:19.157159  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has defined MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
	I0903 23:42:19.157588  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:6b:b9", ip: ""} in network mk-old-k8s-version-335468: {Iface:virbr3 ExpiryTime:2025-09-04 00:42:10 +0000 UTC Type:0 Mac:52:54:00:a2:6b:b9 Iaid: IPaddr:192.168.61.80 Prefix:24 Hostname:old-k8s-version-335468 Clientid:01:52:54:00:a2:6b:b9}
	I0903 23:42:19.157613  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has defined IP address 192.168.61.80 and MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
	I0903 23:42:19.157774  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetSSHPort
	I0903 23:42:19.157993  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetSSHKeyPath
	I0903 23:42:19.158192  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetSSHKeyPath
	I0903 23:42:19.158345  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetSSHUsername
	I0903 23:42:19.158511  171911 main.go:141] libmachine: Using SSH client type: native
	I0903 23:42:19.158713  171911 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83a420] 0x83d120 <nil>  [] 0s} 192.168.61.80 22 <nil> <nil>}
	I0903 23:42:19.158727  171911 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0903 23:42:19.403450  171911 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0903 23:42:19.403503  171911 machine.go:96] duration metric: took 1.133688609s to provisionDockerMachine
	I0903 23:42:19.403516  171911 start.go:293] postStartSetup for "old-k8s-version-335468" (driver="kvm2")
	I0903 23:42:19.403546  171911 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0903 23:42:19.403575  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .DriverName
	I0903 23:42:19.403961  171911 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0903 23:42:19.403992  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetSSHHostname
	I0903 23:42:19.406435  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has defined MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
	I0903 23:42:19.406792  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:6b:b9", ip: ""} in network mk-old-k8s-version-335468: {Iface:virbr3 ExpiryTime:2025-09-04 00:42:10 +0000 UTC Type:0 Mac:52:54:00:a2:6b:b9 Iaid: IPaddr:192.168.61.80 Prefix:24 Hostname:old-k8s-version-335468 Clientid:01:52:54:00:a2:6b:b9}
	I0903 23:42:19.406820  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has defined IP address 192.168.61.80 and MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
	I0903 23:42:19.406954  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetSSHPort
	I0903 23:42:19.407146  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetSSHKeyPath
	I0903 23:42:19.407310  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetSSHUsername
	I0903 23:42:19.407431  171911 sshutil.go:53] new ssh client: &{IP:192.168.61.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21341-109162/.minikube/machines/old-k8s-version-335468/id_rsa Username:docker}
	I0903 23:42:19.498010  171911 ssh_runner.go:195] Run: cat /etc/os-release
	I0903 23:42:19.502446  171911 info.go:137] Remote host: Buildroot 2025.02
	I0903 23:42:19.502472  171911 filesync.go:126] Scanning /home/jenkins/minikube-integration/21341-109162/.minikube/addons for local assets ...
	I0903 23:42:19.502533  171911 filesync.go:126] Scanning /home/jenkins/minikube-integration/21341-109162/.minikube/files for local assets ...
	I0903 23:42:19.502606  171911 filesync.go:149] local asset: /home/jenkins/minikube-integration/21341-109162/.minikube/files/etc/ssl/certs/1132882.pem -> 1132882.pem in /etc/ssl/certs
	I0903 23:42:19.502691  171911 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0903 23:42:19.513148  171911 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/files/etc/ssl/certs/1132882.pem --> /etc/ssl/certs/1132882.pem (1708 bytes)
	I0903 23:42:19.539923  171911 start.go:296] duration metric: took 136.378767ms for postStartSetup
	I0903 23:42:19.539966  171911 fix.go:56] duration metric: took 20.655932447s for fixHost
	I0903 23:42:19.539987  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetSSHHostname
	I0903 23:42:19.542771  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has defined MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
	I0903 23:42:19.543135  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:6b:b9", ip: ""} in network mk-old-k8s-version-335468: {Iface:virbr3 ExpiryTime:2025-09-04 00:42:10 +0000 UTC Type:0 Mac:52:54:00:a2:6b:b9 Iaid: IPaddr:192.168.61.80 Prefix:24 Hostname:old-k8s-version-335468 Clientid:01:52:54:00:a2:6b:b9}
	I0903 23:42:19.543163  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has defined IP address 192.168.61.80 and MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
	I0903 23:42:19.543432  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetSSHPort
	I0903 23:42:19.543661  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetSSHKeyPath
	I0903 23:42:19.543924  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetSSHKeyPath
	I0903 23:42:19.544083  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetSSHUsername
	I0903 23:42:19.544239  171911 main.go:141] libmachine: Using SSH client type: native
	I0903 23:42:19.544450  171911 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83a420] 0x83d120 <nil>  [] 0s} 192.168.61.80 22 <nil> <nil>}
	I0903 23:42:19.544464  171911 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0903 23:42:19.658283  171911 main.go:141] libmachine: SSH cmd err, output: <nil>: 1756942939.619184337
	
	I0903 23:42:19.658310  171911 fix.go:216] guest clock: 1756942939.619184337
	I0903 23:42:19.658320  171911 fix.go:229] Guest: 2025-09-03 23:42:19.619184337 +0000 UTC Remote: 2025-09-03 23:42:19.539969783 +0000 UTC m=+20.799287975 (delta=79.214554ms)
	I0903 23:42:19.658340  171911 fix.go:200] guest clock delta is within tolerance: 79.214554ms
	I0903 23:42:19.658346  171911 start.go:83] releasing machines lock for "old-k8s-version-335468", held for 20.774323746s
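
The fix above compares the guest's `date +%s.%N` output against the host clock and accepts the drift if it is within tolerance. A small sketch of that comparison, assuming an illustrative one-second tolerance (minikube's actual threshold may differ):

// Minimal sketch of the guest-clock check: parse the guest's epoch time and
// compare it against the host clock.
package main

import (
	"fmt"
	"math"
	"strconv"
	"strings"
	"time"
)

// parseUnix converts "1756942939.619184337" into a time.Time.
func parseUnix(s string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		// Right-pad the fraction to 9 digits so ".6191" means 619100000ns.
		frac := (parts[1] + "000000000")[:9]
		if nsec, err = strconv.ParseInt(frac, 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, _ := parseUnix("1756942939.619184337") // value from the log above
	delta := time.Since(guest)
	const tolerance = time.Second // illustrative; minikube's threshold differs
	if math.Abs(float64(delta)) > float64(tolerance) {
		fmt.Printf("guest clock delta %v exceeds tolerance %v\n", delta, tolerance)
	} else {
		fmt.Printf("guest clock delta is within tolerance: %v\n", delta)
	}
}
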
	I0903 23:42:19.658367  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .DriverName
	I0903 23:42:19.658686  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetIP
	I0903 23:42:19.661465  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has defined MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
	I0903 23:42:19.661820  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:6b:b9", ip: ""} in network mk-old-k8s-version-335468: {Iface:virbr3 ExpiryTime:2025-09-04 00:42:10 +0000 UTC Type:0 Mac:52:54:00:a2:6b:b9 Iaid: IPaddr:192.168.61.80 Prefix:24 Hostname:old-k8s-version-335468 Clientid:01:52:54:00:a2:6b:b9}
	I0903 23:42:19.661848  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has defined IP address 192.168.61.80 and MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
	I0903 23:42:19.662028  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .DriverName
	I0903 23:42:19.662525  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .DriverName
	I0903 23:42:19.662702  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .DriverName
	I0903 23:42:19.662785  171911 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0903 23:42:19.662846  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetSSHHostname
	I0903 23:42:19.662927  171911 ssh_runner.go:195] Run: cat /version.json
	I0903 23:42:19.662943  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetSSHHostname
	I0903 23:42:19.665354  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has defined MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
	I0903 23:42:19.665683  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has defined MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
	I0903 23:42:19.665718  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:6b:b9", ip: ""} in network mk-old-k8s-version-335468: {Iface:virbr3 ExpiryTime:2025-09-04 00:42:10 +0000 UTC Type:0 Mac:52:54:00:a2:6b:b9 Iaid: IPaddr:192.168.61.80 Prefix:24 Hostname:old-k8s-version-335468 Clientid:01:52:54:00:a2:6b:b9}
	I0903 23:42:19.665740  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has defined IP address 192.168.61.80 and MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
	I0903 23:42:19.665938  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetSSHPort
	I0903 23:42:19.666142  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:6b:b9", ip: ""} in network mk-old-k8s-version-335468: {Iface:virbr3 ExpiryTime:2025-09-04 00:42:10 +0000 UTC Type:0 Mac:52:54:00:a2:6b:b9 Iaid: IPaddr:192.168.61.80 Prefix:24 Hostname:old-k8s-version-335468 Clientid:01:52:54:00:a2:6b:b9}
	I0903 23:42:19.666154  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetSSHKeyPath
	I0903 23:42:19.666167  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has defined IP address 192.168.61.80 and MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
	I0903 23:42:19.666342  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetSSHUsername
	I0903 23:42:19.666367  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetSSHPort
	I0903 23:42:19.666528  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetSSHKeyPath
	I0903 23:42:19.666520  171911 sshutil.go:53] new ssh client: &{IP:192.168.61.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21341-109162/.minikube/machines/old-k8s-version-335468/id_rsa Username:docker}
	I0903 23:42:19.666673  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetSSHUsername
	I0903 23:42:19.666795  171911 sshutil.go:53] new ssh client: &{IP:192.168.61.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21341-109162/.minikube/machines/old-k8s-version-335468/id_rsa Username:docker}
	I0903 23:42:19.778070  171911 ssh_runner.go:195] Run: systemctl --version
	I0903 23:42:19.783809  171911 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0903 23:42:19.925729  171911 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0903 23:42:19.931814  171911 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0903 23:42:19.931870  171911 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0903 23:42:19.950008  171911 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0903 23:42:19.950038  171911 start.go:495] detecting cgroup driver to use...
	I0903 23:42:19.950104  171911 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0903 23:42:19.969078  171911 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0903 23:42:19.984800  171911 docker.go:218] disabling cri-docker service (if available) ...
	I0903 23:42:19.984862  171911 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0903 23:42:19.999909  171911 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0903 23:42:20.014636  171911 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0903 23:42:20.158742  171911 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0903 23:42:20.297981  171911 docker.go:234] disabling docker service ...
	I0903 23:42:20.298074  171911 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0903 23:42:20.314384  171911 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0903 23:42:20.327885  171911 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0903 23:42:20.530158  171911 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0903 23:42:20.665612  171911 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0903 23:42:20.680150  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0903 23:42:20.700792  171911 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0903 23:42:20.700857  171911 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0903 23:42:20.712182  171911 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0903 23:42:20.712258  171911 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0903 23:42:20.723777  171911 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0903 23:42:20.734863  171911 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0903 23:42:20.746438  171911 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0903 23:42:20.759910  171911 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0903 23:42:20.769436  171911 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0903 23:42:20.769493  171911 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0903 23:42:20.788756  171911 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0903 23:42:20.799437  171911 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0903 23:42:20.954989  171911 ssh_runner.go:195] Run: sudo systemctl restart crio
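
The sequence above probes the bridge-nf-call-iptables sysctl, falls back to `modprobe br_netfilter` when the probe fails, and then enables IPv4 forwarding. A minimal Go sketch of that verify-then-fallback flow using the same commands:

// Minimal sketch: verify the netfilter sysctl, load br_netfilter if the
// probe fails, then enable IPv4 forwarding.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func run(name string, args ...string) error {
	cmd := exec.Command(name, args...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	// The sysctl probe fails until br_netfilter is loaded, which "might be
	// okay" per the log: fall back to modprobe.
	if err := run("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables"); err != nil {
		fmt.Println("sysctl probe failed, loading br_netfilter:", err)
		if err := run("sudo", "modprobe", "br_netfilter"); err != nil {
			fmt.Fprintln(os.Stderr, "modprobe br_netfilter:", err)
			os.Exit(1)
		}
	}
	if err := run("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward"); err != nil {
		fmt.Fprintln(os.Stderr, "enable ip_forward:", err)
		os.Exit(1)
	}
}
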
	I0903 23:42:21.072550  171911 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0903 23:42:21.072649  171911 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0903 23:42:21.077536  171911 start.go:563] Will wait 60s for crictl version
	I0903 23:42:21.077592  171911 ssh_runner.go:195] Run: which crictl
	I0903 23:42:21.081093  171911 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0903 23:42:21.119015  171911 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0903 23:42:21.119097  171911 ssh_runner.go:195] Run: crio --version
	I0903 23:42:21.146341  171911 ssh_runner.go:195] Run: crio --version
	I0903 23:42:21.176700  171911 out.go:179] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0903 23:42:21.177731  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetIP
	I0903 23:42:21.180269  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has defined MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
	I0903 23:42:21.180568  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:6b:b9", ip: ""} in network mk-old-k8s-version-335468: {Iface:virbr3 ExpiryTime:2025-09-04 00:42:10 +0000 UTC Type:0 Mac:52:54:00:a2:6b:b9 Iaid: IPaddr:192.168.61.80 Prefix:24 Hostname:old-k8s-version-335468 Clientid:01:52:54:00:a2:6b:b9}
	I0903 23:42:21.180599  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has defined IP address 192.168.61.80 and MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
	I0903 23:42:21.180856  171911 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0903 23:42:21.185094  171911 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0903 23:42:21.198784  171911 kubeadm.go:875] updating cluster {Name:old-k8s-version-335468 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21147/minikube-v1.36.0-1753487480-21147-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-335468 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.80 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0903 23:42:21.198887  171911 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0903 23:42:21.198930  171911 ssh_runner.go:195] Run: sudo crictl images --output json
	I0903 23:42:21.245403  171911 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0903 23:42:21.245474  171911 ssh_runner.go:195] Run: which lz4
	I0903 23:42:21.249531  171911 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0903 23:42:21.253934  171911 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0903 23:42:21.253970  171911 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0903 23:42:22.735338  171911 crio.go:462] duration metric: took 1.48583725s to copy over tarball
	I0903 23:42:22.735409  171911 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0903 23:42:24.901192  171911 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.165749867s)
	I0903 23:42:24.901224  171911 crio.go:469] duration metric: took 2.165856963s to extract the tarball
	I0903 23:42:24.901234  171911 ssh_runner.go:146] rm: /preloaded.tar.lz4
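
The preload step above copies the cached tarball to /preloaded.tar.lz4 and unpacks it into /var with `tar -I lz4`, timing the operation and removing the tarball afterwards. A sketch that runs the same extraction locally (the ssh_runner plumbing is omitted):

// Minimal sketch of the preload extraction: stream the tarball through
// `tar -I lz4` into /var, report the duration, then delete the tarball.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"time"
)

func main() {
	const tarball = "/preloaded.tar.lz4"
	start := time.Now()
	cmd := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", tarball)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		fmt.Fprintln(os.Stderr, "extract failed:", err)
		os.Exit(1)
	}
	// Mirror the duration metric and the cleanup rm from the log.
	fmt.Printf("took %s to extract the tarball\n", time.Since(start))
	if err := os.Remove(tarball); err != nil {
		fmt.Fprintln(os.Stderr, "rm:", err)
	}
}
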
	I0903 23:42:24.945210  171911 ssh_runner.go:195] Run: sudo crictl images --output json
	I0903 23:42:24.977983  171911 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0903 23:42:24.978011  171911 cache_images.go:89] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0903 23:42:24.978093  171911 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0903 23:42:24.978095  171911 image.go:138] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0903 23:42:24.978122  171911 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0903 23:42:24.978134  171911 image.go:138] retrieving image: registry.k8s.io/coredns:1.7.0
	I0903 23:42:24.978092  171911 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0903 23:42:24.978167  171911 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0903 23:42:24.978180  171911 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0903 23:42:24.978151  171911 image.go:138] retrieving image: registry.k8s.io/pause:3.2
	I0903 23:42:24.979632  171911 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0903 23:42:24.979647  171911 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0903 23:42:24.979664  171911 image.go:181] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0903 23:42:24.979669  171911 image.go:181] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0903 23:42:24.979651  171911 image.go:181] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0903 23:42:24.979683  171911 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0903 23:42:24.979708  171911 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0903 23:42:24.979715  171911 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0903 23:42:25.139789  171911 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0903 23:42:25.149556  171911 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0903 23:42:25.153427  171911 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0903 23:42:25.156447  171911 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0903 23:42:25.166085  171911 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0903 23:42:25.178841  171911 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0903 23:42:25.180227  171911 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0903 23:42:25.223305  171911 cache_images.go:117] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0903 23:42:25.223359  171911 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0903 23:42:25.223398  171911 ssh_runner.go:195] Run: which crictl
	I0903 23:42:25.287785  171911 cache_images.go:117] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0903 23:42:25.287834  171911 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0903 23:42:25.287879  171911 ssh_runner.go:195] Run: which crictl
	I0903 23:42:25.303285  171911 cache_images.go:117] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0903 23:42:25.303336  171911 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0903 23:42:25.303345  171911 cache_images.go:117] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0903 23:42:25.303383  171911 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0903 23:42:25.303392  171911 ssh_runner.go:195] Run: which crictl
	I0903 23:42:25.303431  171911 ssh_runner.go:195] Run: which crictl
	I0903 23:42:25.311751  171911 cache_images.go:117] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0903 23:42:25.311798  171911 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0903 23:42:25.311803  171911 cache_images.go:117] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0903 23:42:25.311842  171911 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0903 23:42:25.311855  171911 ssh_runner.go:195] Run: which crictl
	I0903 23:42:25.311888  171911 ssh_runner.go:195] Run: which crictl
	I0903 23:42:25.324120  171911 cache_images.go:117] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0903 23:42:25.324164  171911 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0903 23:42:25.324187  171911 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0903 23:42:25.324202  171911 ssh_runner.go:195] Run: which crictl
	I0903 23:42:25.324241  171911 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0903 23:42:25.324655  171911 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0903 23:42:25.324678  171911 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0903 23:42:25.324906  171911 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0903 23:42:25.325033  171911 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0903 23:42:25.422314  171911 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0903 23:42:25.422412  171911 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0903 23:42:25.436779  171911 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0903 23:42:25.479512  171911 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0903 23:42:25.482280  171911 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0903 23:42:25.482370  171911 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0903 23:42:25.482417  171911 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0903 23:42:25.528977  171911 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0903 23:42:25.529015  171911 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0903 23:42:25.566801  171911 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0903 23:42:25.639744  171911 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0903 23:42:25.639744  171911 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0903 23:42:25.639814  171911 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0903 23:42:25.639829  171911 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0903 23:42:25.680104  171911 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21341-109162/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0903 23:42:25.680249  171911 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0903 23:42:25.680257  171911 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21341-109162/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0903 23:42:25.724922  171911 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21341-109162/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0903 23:42:25.747501  171911 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21341-109162/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0903 23:42:25.747577  171911 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21341-109162/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0903 23:42:25.751768  171911 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21341-109162/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0903 23:42:25.760936  171911 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21341-109162/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0903 23:42:26.285671  171911 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0903 23:42:26.426376  171911 cache_images.go:93] duration metric: took 1.448344647s to LoadCachedImages
	W0903 23:42:26.426480  171911 out.go:285] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/21341-109162/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0: no such file or directory
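	The "Unable to load cached images" warning above just means the etcd tarball is absent from minikube's on-disk image cache, so that image will have to come from the network. A minimal sketch to confirm the cache state by hand (cache path taken from the log):
	
	    # If the tarball is missing, minikube falls back to pulling the
	    # image from the registry when the control plane starts.
	    CACHE=/home/jenkins/minikube-integration/21341-109162/.minikube/cache
	    test -f "$CACHE/images/amd64/registry.k8s.io/etcd_3.4.13-0" \
	      && echo "etcd 3.4.13-0 cached" || echo "etcd 3.4.13-0 not cached"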
	I0903 23:42:26.426499  171911 kubeadm.go:926] updating node { 192.168.61.80 8443 v1.20.0 crio true true} ...
	I0903 23:42:26.426618  171911 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-335468 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.80
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-335468 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
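	The empty ExecStart= line above is the standard systemd idiom for replacing, rather than appending to, a unit's start command; the second ExecStart= then carries the full kubelet invocation. A minimal sketch for inspecting the merged unit on the node:
	
	    # Print the unit plus every drop-in, then the effective ExecStart
	    # after systemd has applied the override.
	    systemctl cat kubelet
	    systemctl show kubelet --property=ExecStart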
	I0903 23:42:26.426702  171911 ssh_runner.go:195] Run: crio config
	I0903 23:42:26.476895  171911 cni.go:84] Creating CNI manager for ""
	I0903 23:42:26.476919  171911 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0903 23:42:26.476933  171911 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0903 23:42:26.476956  171911 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.80 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-335468 NodeName:old-k8s-version-335468 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.80"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.80 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0903 23:42:26.477114  171911 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.80
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-335468"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.80
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.80"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
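	The rendered manifest above is four YAML documents in one file: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration. Once the scp step below has copied it to the node, a minimal sketch to sanity-check the rendered copy (profile name and path taken from the log):
	
	    # List the apiVersion/kind headers of each embedded document.
	    minikube -p old-k8s-version-335468 ssh -- \
	      "sudo grep -E '^(apiVersion|kind):' /var/tmp/minikube/kubeadm.yaml.new"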
	
	I0903 23:42:26.477233  171911 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0903 23:42:26.490694  171911 binaries.go:44] Found k8s binaries, skipping transfer
	I0903 23:42:26.490775  171911 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0903 23:42:26.501798  171911 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0903 23:42:26.520806  171911 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0903 23:42:26.539068  171911 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0903 23:42:26.558168  171911 ssh_runner.go:195] Run: grep 192.168.61.80	control-plane.minikube.internal$ /etc/hosts
	I0903 23:42:26.562134  171911 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.80	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
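	The one-liner above is an idempotent hosts-file update: it strips any existing control-plane.minikube.internal line and appends a fresh one, so repeated runs converge on a single entry. A generalized sketch of the same pattern (host name and IP are the values from the log; the name is matched as a plain regex, which is fine for this fixed string):
	
	    ip=192.168.61.80
	    name=control-plane.minikube.internal
	    # Drop any stale line for $name, append the current mapping, and
	    # replace /etc/hosts via a temp file so it stays valid throughout.
	    { grep -v $'\t'"${name}"'$' /etc/hosts
	      printf '%s\t%s\n' "$ip" "$name"; } > "/tmp/hosts.$$" \
	      && sudo cp "/tmp/hosts.$$" /etc/hosts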
	I0903 23:42:26.575449  171911 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0903 23:42:26.711961  171911 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0903 23:42:26.759354  171911 certs.go:68] Setting up /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/old-k8s-version-335468 for IP: 192.168.61.80
	I0903 23:42:26.759380  171911 certs.go:194] generating shared ca certs ...
	I0903 23:42:26.759407  171911 certs.go:226] acquiring lock for ca certs: {Name:mk06ceac58fd506484973632a7bd0b701183c2c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0903 23:42:26.759577  171911 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21341-109162/.minikube/ca.key
	I0903 23:42:26.759632  171911 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21341-109162/.minikube/proxy-client-ca.key
	I0903 23:42:26.759646  171911 certs.go:256] generating profile certs ...
	I0903 23:42:26.759743  171911 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/old-k8s-version-335468/client.key
	I0903 23:42:26.759820  171911 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/old-k8s-version-335468/apiserver.key.f2828629
	I0903 23:42:26.759878  171911 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/old-k8s-version-335468/proxy-client.key
	I0903 23:42:26.760013  171911 certs.go:484] found cert: /home/jenkins/minikube-integration/21341-109162/.minikube/certs/113288.pem (1338 bytes)
	W0903 23:42:26.760052  171911 certs.go:480] ignoring /home/jenkins/minikube-integration/21341-109162/.minikube/certs/113288_empty.pem, impossibly tiny 0 bytes
	I0903 23:42:26.760066  171911 certs.go:484] found cert: /home/jenkins/minikube-integration/21341-109162/.minikube/certs/ca-key.pem (1675 bytes)
	I0903 23:42:26.760099  171911 certs.go:484] found cert: /home/jenkins/minikube-integration/21341-109162/.minikube/certs/ca.pem (1078 bytes)
	I0903 23:42:26.760133  171911 certs.go:484] found cert: /home/jenkins/minikube-integration/21341-109162/.minikube/certs/cert.pem (1123 bytes)
	I0903 23:42:26.760167  171911 certs.go:484] found cert: /home/jenkins/minikube-integration/21341-109162/.minikube/certs/key.pem (1675 bytes)
	I0903 23:42:26.760220  171911 certs.go:484] found cert: /home/jenkins/minikube-integration/21341-109162/.minikube/files/etc/ssl/certs/1132882.pem (1708 bytes)
	I0903 23:42:26.760811  171911 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0903 23:42:26.791932  171911 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0903 23:42:26.824575  171911 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0903 23:42:26.853358  171911 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0903 23:42:26.887411  171911 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/old-k8s-version-335468/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0903 23:42:26.914421  171911 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/old-k8s-version-335468/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0903 23:42:26.940984  171911 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/old-k8s-version-335468/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0903 23:42:26.968279  171911 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/old-k8s-version-335468/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0903 23:42:26.995059  171911 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/certs/113288.pem --> /usr/share/ca-certificates/113288.pem (1338 bytes)
	I0903 23:42:27.023211  171911 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/files/etc/ssl/certs/1132882.pem --> /usr/share/ca-certificates/1132882.pem (1708 bytes)
	I0903 23:42:27.049929  171911 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0903 23:42:27.076578  171911 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0903 23:42:27.095209  171911 ssh_runner.go:195] Run: openssl version
	I0903 23:42:27.100879  171911 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0903 23:42:27.112933  171911 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0903 23:42:27.118040  171911 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep  3 22:28 /usr/share/ca-certificates/minikubeCA.pem
	I0903 23:42:27.118090  171911 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0903 23:42:27.125341  171911 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0903 23:42:27.140002  171911 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/113288.pem && ln -fs /usr/share/ca-certificates/113288.pem /etc/ssl/certs/113288.pem"
	I0903 23:42:27.154488  171911 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/113288.pem
	I0903 23:42:27.159574  171911 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep  3 22:36 /usr/share/ca-certificates/113288.pem
	I0903 23:42:27.159635  171911 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/113288.pem
	I0903 23:42:27.166580  171911 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/113288.pem /etc/ssl/certs/51391683.0"
	I0903 23:42:27.180666  171911 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1132882.pem && ln -fs /usr/share/ca-certificates/1132882.pem /etc/ssl/certs/1132882.pem"
	I0903 23:42:27.194853  171911 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1132882.pem
	I0903 23:42:27.199793  171911 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep  3 22:36 /usr/share/ca-certificates/1132882.pem
	I0903 23:42:27.199841  171911 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1132882.pem
	I0903 23:42:27.206851  171911 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1132882.pem /etc/ssl/certs/3ec20f2e.0"
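	Each openssl x509 -hash / ln -fs pair above follows the standard OpenSSL trust-store convention: the symlink is named after the certificate's subject hash plus a ".0" suffix, which is where names like b5213941.0 for minikubeCA.pem come from. One iteration, spelled out:
	
	    cert=/usr/share/ca-certificates/minikubeCA.pem
	    # Subject-hash naming is what c_rehash automates; TLS clients look
	    # up issuing CAs in /etc/ssl/certs by this <hash>.0 filename.
	    h=$(openssl x509 -hash -noout -in "$cert")
	    sudo ln -fs "$cert" "/etc/ssl/certs/${h}.0"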
	I0903 23:42:27.221163  171911 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0903 23:42:27.226347  171911 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0903 23:42:27.233982  171911 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0903 23:42:27.241290  171911 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0903 23:42:27.248464  171911 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0903 23:42:27.255916  171911 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0903 23:42:27.263308  171911 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
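	The -checkend 86400 flag makes openssl exit non-zero if the certificate expires within the next 86400 seconds (24 hours), which is how minikube decides whether control-plane certificates need regeneration. The six checks above, collapsed into one loop:
	
	    for c in apiserver-kubelet-client apiserver-etcd-client \
	             front-proxy-client etcd/server etcd/peer \
	             etcd/healthcheck-client; do
	      sudo openssl x509 -noout -checkend 86400 \
	        -in "/var/lib/minikube/certs/${c}.crt" \
	        || echo "${c}.crt expires within 24h"
	    done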
	I0903 23:42:27.270533  171911 kubeadm.go:392] StartCluster: {Name:old-k8s-version-335468 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21147/minikube-v1.36.0-1753487480-21147-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-335468 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.80 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0903 23:42:27.270648  171911 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0903 23:42:27.270739  171911 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0903 23:42:27.306525  171911 cri.go:89] found id: ""
	I0903 23:42:27.306598  171911 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0903 23:42:27.318570  171911 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0903 23:42:27.318592  171911 kubeadm.go:589] restartPrimaryControlPlane start ...
	I0903 23:42:27.318639  171911 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0903 23:42:27.329789  171911 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0903 23:42:27.330196  171911 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-335468" does not appear in /home/jenkins/minikube-integration/21341-109162/kubeconfig
	I0903 23:42:27.330362  171911 kubeconfig.go:62] /home/jenkins/minikube-integration/21341-109162/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-335468" cluster setting kubeconfig missing "old-k8s-version-335468" context setting]
	I0903 23:42:27.330702  171911 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21341-109162/kubeconfig: {Name:mkb8e4377749caffe9aea4452a6b5de9f0dc7427 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
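	Repairing the kubeconfig here just means adding the missing cluster and context entries for the profile. A rough equivalent with plain kubectl (names, server address, and path from the log; the credentials entry is omitted for brevity):
	
	    KUBECONFIG=/home/jenkins/minikube-integration/21341-109162/kubeconfig
	    kubectl config set-cluster old-k8s-version-335468 \
	      --server=https://192.168.61.80:8443 --kubeconfig="$KUBECONFIG"
	    kubectl config set-context old-k8s-version-335468 \
	      --cluster=old-k8s-version-335468 --kubeconfig="$KUBECONFIG"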
	I0903 23:42:27.374758  171911 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0903 23:42:27.386214  171911 kubeadm.go:626] The running cluster does not require reconfiguration: 192.168.61.80
	I0903 23:42:27.386258  171911 kubeadm.go:1152] stopping kube-system containers ...
	I0903 23:42:27.386272  171911 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0903 23:42:27.386331  171911 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0903 23:42:27.425149  171911 cri.go:89] found id: ""
	I0903 23:42:27.425215  171911 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0903 23:42:27.445596  171911 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0903 23:42:27.456478  171911 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0903 23:42:27.456499  171911 kubeadm.go:157] found existing configuration files:
	
	I0903 23:42:27.456562  171911 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0903 23:42:27.466434  171911 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0903 23:42:27.466490  171911 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0903 23:42:27.477542  171911 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0903 23:42:27.487494  171911 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0903 23:42:27.487556  171911 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0903 23:42:27.498329  171911 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0903 23:42:27.508036  171911 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0903 23:42:27.508096  171911 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0903 23:42:27.521941  171911 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0903 23:42:27.531852  171911 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0903 23:42:27.531907  171911 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
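	The four grep-then-rm stanzas above implement a single rule: any kubeconfig under /etc/kubernetes that does not point at https://control-plane.minikube.internal:8443 is considered stale and is removed before kubeadm regenerates it. Compactly:
	
	    for f in admin kubelet controller-manager scheduler; do
	      sudo grep -q 'https://control-plane.minikube.internal:8443' \
	        "/etc/kubernetes/${f}.conf" 2>/dev/null \
	        || sudo rm -f "/etc/kubernetes/${f}.conf"
	    done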
	I0903 23:42:27.542155  171911 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0903 23:42:27.553239  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0903 23:42:27.633226  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0903 23:42:28.602124  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0903 23:42:28.854495  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0903 23:42:28.947073  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
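	Rather than a full kubeadm init, the restart path replays only the phases it needs: certs, kubeconfig, kubelet-start, control-plane, and etcd. The same sequence by hand (binary and config paths from the log; $phase is deliberately unquoted so each entry splits into subcommand words):
	
	    K8S_BIN=/var/lib/minikube/binaries/v1.20.0
	    for phase in 'certs all' 'kubeconfig all' 'kubelet-start' \
	                 'control-plane all' 'etcd local'; do
	      sudo env PATH="$K8S_BIN:$PATH" \
	        kubeadm init phase $phase --config /var/tmp/minikube/kubeadm.yaml
	    done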
	I0903 23:42:29.027974  171911 api_server.go:52] waiting for apiserver process to appear ...
	I0903 23:42:29.028070  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:42:29.528786  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:42:30.029080  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	[... 116 further "sudo pgrep -xnf kube-apiserver.*minikube.*" polls at ~500ms intervals, 23:42:30.529093 through 23:43:28.028496, elided ...]
	I0903 23:43:28.528556  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
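	The polls above fire roughly every 500ms; after about a minute with no kube-apiserver process, minikube starts interleaving diagnostics with further retries. A minimal sketch of the same wait loop (the 60s timeout is illustrative):
	
	    deadline=$((SECONDS + 60))
	    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
	      (( SECONDS < deadline )) || { echo 'apiserver never appeared'; break; }
	      sleep 0.5
	    done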
	I0903 23:43:29.028482  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:43:29.028567  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:43:29.065203  171911 cri.go:89] found id: ""
	I0903 23:43:29.065238  171911 logs.go:282] 0 containers: []
	W0903 23:43:29.065249  171911 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:43:29.065257  171911 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:43:29.065323  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:43:29.099969  171911 cri.go:89] found id: ""
	I0903 23:43:29.100008  171911 logs.go:282] 0 containers: []
	W0903 23:43:29.100020  171911 logs.go:284] No container was found matching "etcd"
	I0903 23:43:29.100030  171911 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:43:29.100100  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:43:29.134038  171911 cri.go:89] found id: ""
	I0903 23:43:29.134075  171911 logs.go:282] 0 containers: []
	W0903 23:43:29.134088  171911 logs.go:284] No container was found matching "coredns"
	I0903 23:43:29.134096  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:43:29.134166  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:43:29.167976  171911 cri.go:89] found id: ""
	I0903 23:43:29.168009  171911 logs.go:282] 0 containers: []
	W0903 23:43:29.168018  171911 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:43:29.168025  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:43:29.168081  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:43:29.203375  171911 cri.go:89] found id: ""
	I0903 23:43:29.203406  171911 logs.go:282] 0 containers: []
	W0903 23:43:29.203414  171911 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:43:29.203420  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:43:29.203487  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:43:29.237316  171911 cri.go:89] found id: ""
	I0903 23:43:29.237347  171911 logs.go:282] 0 containers: []
	W0903 23:43:29.237358  171911 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:43:29.237366  171911 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:43:29.237456  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:43:29.271010  171911 cri.go:89] found id: ""
	I0903 23:43:29.271036  171911 logs.go:282] 0 containers: []
	W0903 23:43:29.271044  171911 logs.go:284] No container was found matching "kindnet"
	I0903 23:43:29.271051  171911 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0903 23:43:29.271115  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0903 23:43:29.305355  171911 cri.go:89] found id: ""
	I0903 23:43:29.305398  171911 logs.go:282] 0 containers: []
	W0903 23:43:29.305410  171911 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0903 23:43:29.305424  171911 logs.go:123] Gathering logs for container status ...
	I0903 23:43:29.305450  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0903 23:43:29.343610  171911 logs.go:123] Gathering logs for kubelet ...
	I0903 23:43:29.343647  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0903 23:43:29.390474  171911 logs.go:123] Gathering logs for dmesg ...
	I0903 23:43:29.390513  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0903 23:43:29.404227  171911 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:43:29.404255  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:43:29.473354  171911 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0903 23:43:29.473377  171911 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:43:29.473409  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
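	Each diagnostics pass queries CRI-O for every expected control-plane container, then collects kubelet, dmesg, node, and CRI-O logs; with no apiserver listening, kubectl describe nodes fails with the connection-refused error shown above. The sweep, reproduced by hand on the node:
	
	    for n in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	             kube-controller-manager kindnet kubernetes-dashboard; do
	      printf '%s: ' "$n"; sudo crictl ps -a --quiet --name="$n" | wc -l
	    done
	    sudo journalctl -u kubelet -n 400
	    sudo journalctl -u crio -n 400
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400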
	I0903 23:43:32.045578  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:43:32.064442  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:43:32.064510  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:43:32.104125  171911 cri.go:89] found id: ""
	I0903 23:43:32.104153  171911 logs.go:282] 0 containers: []
	W0903 23:43:32.104162  171911 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:43:32.104167  171911 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:43:32.104219  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:43:32.140304  171911 cri.go:89] found id: ""
	I0903 23:43:32.140344  171911 logs.go:282] 0 containers: []
	W0903 23:43:32.140357  171911 logs.go:284] No container was found matching "etcd"
	I0903 23:43:32.140366  171911 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:43:32.140436  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:43:32.174194  171911 cri.go:89] found id: ""
	I0903 23:43:32.174227  171911 logs.go:282] 0 containers: []
	W0903 23:43:32.174241  171911 logs.go:284] No container was found matching "coredns"
	I0903 23:43:32.174249  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:43:32.174322  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:43:32.207732  171911 cri.go:89] found id: ""
	I0903 23:43:32.207760  171911 logs.go:282] 0 containers: []
	W0903 23:43:32.207768  171911 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:43:32.207775  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:43:32.207828  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:43:32.242885  171911 cri.go:89] found id: ""
	I0903 23:43:32.242919  171911 logs.go:282] 0 containers: []
	W0903 23:43:32.242927  171911 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:43:32.242934  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:43:32.242991  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:43:32.276911  171911 cri.go:89] found id: ""
	I0903 23:43:32.276938  171911 logs.go:282] 0 containers: []
	W0903 23:43:32.276945  171911 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:43:32.276952  171911 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:43:32.277004  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:43:32.310660  171911 cri.go:89] found id: ""
	I0903 23:43:32.310689  171911 logs.go:282] 0 containers: []
	W0903 23:43:32.310697  171911 logs.go:284] No container was found matching "kindnet"
	I0903 23:43:32.310703  171911 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0903 23:43:32.310753  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0903 23:43:32.344285  171911 cri.go:89] found id: ""
	I0903 23:43:32.344316  171911 logs.go:282] 0 containers: []
	W0903 23:43:32.344327  171911 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0903 23:43:32.344341  171911 logs.go:123] Gathering logs for kubelet ...
	I0903 23:43:32.344357  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0903 23:43:32.394031  171911 logs.go:123] Gathering logs for dmesg ...
	I0903 23:43:32.394079  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0903 23:43:32.408165  171911 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:43:32.408199  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:43:32.473250  171911 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0903 23:43:32.473279  171911 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:43:32.473293  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0903 23:43:32.556677  171911 logs.go:123] Gathering logs for container status ...
	I0903 23:43:32.556722  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0903 23:43:35.104790  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:43:35.121004  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:43:35.121069  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:43:35.153087  171911 cri.go:89] found id: ""
	I0903 23:43:35.153118  171911 logs.go:282] 0 containers: []
	W0903 23:43:35.153126  171911 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:43:35.153133  171911 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:43:35.153187  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:43:35.185837  171911 cri.go:89] found id: ""
	I0903 23:43:35.185877  171911 logs.go:282] 0 containers: []
	W0903 23:43:35.185885  171911 logs.go:284] No container was found matching "etcd"
	I0903 23:43:35.185891  171911 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:43:35.185947  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:43:35.219367  171911 cri.go:89] found id: ""
	I0903 23:43:35.219410  171911 logs.go:282] 0 containers: []
	W0903 23:43:35.219421  171911 logs.go:284] No container was found matching "coredns"
	I0903 23:43:35.219430  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:43:35.219491  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:43:35.253170  171911 cri.go:89] found id: ""
	I0903 23:43:35.253204  171911 logs.go:282] 0 containers: []
	W0903 23:43:35.253218  171911 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:43:35.253239  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:43:35.253325  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:43:35.285565  171911 cri.go:89] found id: ""
	I0903 23:43:35.285599  171911 logs.go:282] 0 containers: []
	W0903 23:43:35.285611  171911 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:43:35.285620  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:43:35.285688  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:43:35.319446  171911 cri.go:89] found id: ""
	I0903 23:43:35.319476  171911 logs.go:282] 0 containers: []
	W0903 23:43:35.319484  171911 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:43:35.319490  171911 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:43:35.319541  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:43:35.354359  171911 cri.go:89] found id: ""
	I0903 23:43:35.354387  171911 logs.go:282] 0 containers: []
	W0903 23:43:35.354394  171911 logs.go:284] No container was found matching "kindnet"
	I0903 23:43:35.354400  171911 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0903 23:43:35.354452  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0903 23:43:35.390780  171911 cri.go:89] found id: ""
	I0903 23:43:35.390815  171911 logs.go:282] 0 containers: []
	W0903 23:43:35.390825  171911 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0903 23:43:35.390837  171911 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:43:35.390852  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0903 23:43:35.465751  171911 logs.go:123] Gathering logs for container status ...
	I0903 23:43:35.465790  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0903 23:43:35.504480  171911 logs.go:123] Gathering logs for kubelet ...
	I0903 23:43:35.504517  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0903 23:43:35.554283  171911 logs.go:123] Gathering logs for dmesg ...
	I0903 23:43:35.554318  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0903 23:43:35.567404  171911 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:43:35.567436  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:43:35.629663  171911 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0903 23:43:38.130296  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:43:38.146915  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:43:38.147003  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:43:38.179729  171911 cri.go:89] found id: ""
	I0903 23:43:38.179768  171911 logs.go:282] 0 containers: []
	W0903 23:43:38.179781  171911 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:43:38.179791  171911 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:43:38.179863  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:43:38.212185  171911 cri.go:89] found id: ""
	I0903 23:43:38.212215  171911 logs.go:282] 0 containers: []
	W0903 23:43:38.212227  171911 logs.go:284] No container was found matching "etcd"
	I0903 23:43:38.212235  171911 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:43:38.212322  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:43:38.245927  171911 cri.go:89] found id: ""
	I0903 23:43:38.245953  171911 logs.go:282] 0 containers: []
	W0903 23:43:38.245960  171911 logs.go:284] No container was found matching "coredns"
	I0903 23:43:38.245966  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:43:38.246027  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:43:38.280868  171911 cri.go:89] found id: ""
	I0903 23:43:38.280900  171911 logs.go:282] 0 containers: []
	W0903 23:43:38.280911  171911 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:43:38.280918  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:43:38.281003  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:43:38.321240  171911 cri.go:89] found id: ""
	I0903 23:43:38.321275  171911 logs.go:282] 0 containers: []
	W0903 23:43:38.321288  171911 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:43:38.321298  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:43:38.321407  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:43:38.375140  171911 cri.go:89] found id: ""
	I0903 23:43:38.375169  171911 logs.go:282] 0 containers: []
	W0903 23:43:38.375183  171911 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:43:38.375191  171911 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:43:38.375277  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:43:38.418890  171911 cri.go:89] found id: ""
	I0903 23:43:38.418928  171911 logs.go:282] 0 containers: []
	W0903 23:43:38.418940  171911 logs.go:284] No container was found matching "kindnet"
	I0903 23:43:38.418950  171911 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0903 23:43:38.419019  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0903 23:43:38.452908  171911 cri.go:89] found id: ""
	I0903 23:43:38.452938  171911 logs.go:282] 0 containers: []
	W0903 23:43:38.452949  171911 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0903 23:43:38.452962  171911 logs.go:123] Gathering logs for kubelet ...
	I0903 23:43:38.452978  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0903 23:43:38.503416  171911 logs.go:123] Gathering logs for dmesg ...
	I0903 23:43:38.503460  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0903 23:43:38.517203  171911 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:43:38.517233  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:43:38.580070  171911 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0903 23:43:38.580096  171911 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:43:38.580110  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0903 23:43:38.652380  171911 logs.go:123] Gathering logs for container status ...
	I0903 23:43:38.652420  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
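
Each cri.go entry above is one container lookup: minikube shells into the VM and runs "sudo crictl ps -a --quiet --name=<component>", and an empty result produces the paired 'found id: ""' / 'No container was found' lines. A minimal local sketch of that probe in Go, assuming crictl is installed and sudo works non-interactively (an illustrative helper, not minikube's actual code):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// listContainers mirrors the probe in the log: ask crictl for all
	// containers (running or exited) whose name matches the component,
	// and return the IDs from its --quiet output, one per line.
	func listContainers(component string) ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a",
			"--quiet", "--name="+component).Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(string(out)), nil
	}

	func main() {
		for _, c := range []string{"kube-apiserver", "etcd", "coredns"} {
			ids, err := listContainers(c)
			if err != nil {
				fmt.Println("probe failed:", err)
				continue
			}
			if len(ids) == 0 {
				// This is the case the log reports over and over.
				fmt.Printf("No container was found matching %q\n", c)
			}
		}
	}
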
	I0903 23:43:41.192031  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:43:41.208483  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:43:41.208546  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:43:41.241854  171911 cri.go:89] found id: ""
	I0903 23:43:41.241880  171911 logs.go:282] 0 containers: []
	W0903 23:43:41.241887  171911 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:43:41.241895  171911 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:43:41.241953  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:43:41.276043  171911 cri.go:89] found id: ""
	I0903 23:43:41.276070  171911 logs.go:282] 0 containers: []
	W0903 23:43:41.276078  171911 logs.go:284] No container was found matching "etcd"
	I0903 23:43:41.276084  171911 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:43:41.276136  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:43:41.312473  171911 cri.go:89] found id: ""
	I0903 23:43:41.312503  171911 logs.go:282] 0 containers: []
	W0903 23:43:41.312514  171911 logs.go:284] No container was found matching "coredns"
	I0903 23:43:41.312522  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:43:41.312591  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:43:41.345515  171911 cri.go:89] found id: ""
	I0903 23:43:41.345543  171911 logs.go:282] 0 containers: []
	W0903 23:43:41.345551  171911 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:43:41.345558  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:43:41.345630  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:43:41.378505  171911 cri.go:89] found id: ""
	I0903 23:43:41.378539  171911 logs.go:282] 0 containers: []
	W0903 23:43:41.378547  171911 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:43:41.378554  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:43:41.378613  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:43:41.414245  171911 cri.go:89] found id: ""
	I0903 23:43:41.414276  171911 logs.go:282] 0 containers: []
	W0903 23:43:41.414284  171911 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:43:41.414290  171911 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:43:41.414351  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:43:41.450931  171911 cri.go:89] found id: ""
	I0903 23:43:41.450969  171911 logs.go:282] 0 containers: []
	W0903 23:43:41.450981  171911 logs.go:284] No container was found matching "kindnet"
	I0903 23:43:41.451050  171911 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0903 23:43:41.451126  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0903 23:43:41.484869  171911 cri.go:89] found id: ""
	I0903 23:43:41.484898  171911 logs.go:282] 0 containers: []
	W0903 23:43:41.484906  171911 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0903 23:43:41.484916  171911 logs.go:123] Gathering logs for dmesg ...
	I0903 23:43:41.484934  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0903 23:43:41.498189  171911 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:43:41.498219  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:43:41.560558  171911 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0903 23:43:41.560583  171911 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:43:41.560601  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0903 23:43:41.637195  171911 logs.go:123] Gathering logs for container status ...
	I0903 23:43:41.637235  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0903 23:43:41.675448  171911 logs.go:123] Gathering logs for kubelet ...
	I0903 23:43:41.675478  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
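
The recurring describe-nodes failure is the same underlying symptom: nothing is listening on the apiserver endpoint, so every kubectl call against localhost:8443 is refused. That can be confirmed independently of kubectl; a hedged Go sketch (the port comes from the error text above; adjust for other configurations):

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Dial the endpoint the log's kubectl keeps failing against.
		conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
		if err != nil {
			// Matches "The connection to the server localhost:8443 was refused".
			fmt.Println("apiserver not reachable:", err)
			return
		}
		conn.Close()
		fmt.Println("something is listening on localhost:8443")
	}
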
	I0903 23:43:44.223401  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:43:44.253341  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:43:44.253423  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:43:44.300478  171911 cri.go:89] found id: ""
	I0903 23:43:44.300512  171911 logs.go:282] 0 containers: []
	W0903 23:43:44.300523  171911 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:43:44.300531  171911 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:43:44.300625  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:43:44.342127  171911 cri.go:89] found id: ""
	I0903 23:43:44.342158  171911 logs.go:282] 0 containers: []
	W0903 23:43:44.342166  171911 logs.go:284] No container was found matching "etcd"
	I0903 23:43:44.342178  171911 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:43:44.342242  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:43:44.392479  171911 cri.go:89] found id: ""
	I0903 23:43:44.392505  171911 logs.go:282] 0 containers: []
	W0903 23:43:44.392514  171911 logs.go:284] No container was found matching "coredns"
	I0903 23:43:44.392522  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:43:44.392587  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:43:44.428584  171911 cri.go:89] found id: ""
	I0903 23:43:44.428627  171911 logs.go:282] 0 containers: []
	W0903 23:43:44.428646  171911 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:43:44.428655  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:43:44.428724  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:43:44.463165  171911 cri.go:89] found id: ""
	I0903 23:43:44.463196  171911 logs.go:282] 0 containers: []
	W0903 23:43:44.463205  171911 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:43:44.463214  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:43:44.463276  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:43:44.497562  171911 cri.go:89] found id: ""
	I0903 23:43:44.497599  171911 logs.go:282] 0 containers: []
	W0903 23:43:44.497606  171911 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:43:44.497616  171911 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:43:44.497671  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:43:44.532319  171911 cri.go:89] found id: ""
	I0903 23:43:44.532349  171911 logs.go:282] 0 containers: []
	W0903 23:43:44.532356  171911 logs.go:284] No container was found matching "kindnet"
	I0903 23:43:44.532371  171911 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0903 23:43:44.532431  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0903 23:43:44.567181  171911 cri.go:89] found id: ""
	I0903 23:43:44.567214  171911 logs.go:282] 0 containers: []
	W0903 23:43:44.567229  171911 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0903 23:43:44.567242  171911 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:43:44.567259  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0903 23:43:44.647186  171911 logs.go:123] Gathering logs for container status ...
	I0903 23:43:44.647237  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0903 23:43:44.684779  171911 logs.go:123] Gathering logs for kubelet ...
	I0903 23:43:44.684815  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0903 23:43:44.734346  171911 logs.go:123] Gathering logs for dmesg ...
	I0903 23:43:44.734384  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0903 23:43:44.748304  171911 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:43:44.748333  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:43:44.811995  171911 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
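
Note the cadence: the timestamps show a fresh pgrep probe for kube-apiserver roughly every 3 seconds (23:43:38, :41, :44, ...), i.e. a fixed-interval wait loop that keeps re-listing containers until the apiserver appears or the overall deadline expires. A minimal sketch of such a loop in Go (the interval and timeout are illustrative values, not minikube's actual constants):

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// waitForAPIServer polls for a running kube-apiserver process until
	// the deadline passes, mirroring the 3-second cadence in the log.
	func waitForAPIServer(interval, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			// pgrep exits non-zero when no process matches.
			if err := exec.Command("sudo", "pgrep", "-xnf",
				"kube-apiserver.*minikube.*").Run(); err == nil {
				return nil
			}
			time.Sleep(interval)
		}
		return fmt.Errorf("kube-apiserver did not appear within %s", timeout)
	}

	func main() {
		if err := waitForAPIServer(3*time.Second, 30*time.Second); err != nil {
			fmt.Println(err)
		}
	}
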
	I0903 23:43:47.313737  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:43:47.330976  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:43:47.331047  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:43:47.365152  171911 cri.go:89] found id: ""
	I0903 23:43:47.365183  171911 logs.go:282] 0 containers: []
	W0903 23:43:47.365191  171911 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:43:47.365198  171911 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:43:47.365250  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:43:47.402002  171911 cri.go:89] found id: ""
	I0903 23:43:47.402034  171911 logs.go:282] 0 containers: []
	W0903 23:43:47.402042  171911 logs.go:284] No container was found matching "etcd"
	I0903 23:43:47.402048  171911 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:43:47.402103  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:43:47.439574  171911 cri.go:89] found id: ""
	I0903 23:43:47.439611  171911 logs.go:282] 0 containers: []
	W0903 23:43:47.439619  171911 logs.go:284] No container was found matching "coredns"
	I0903 23:43:47.439626  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:43:47.439694  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:43:47.474877  171911 cri.go:89] found id: ""
	I0903 23:43:47.474910  171911 logs.go:282] 0 containers: []
	W0903 23:43:47.474918  171911 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:43:47.474925  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:43:47.474980  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:43:47.511850  171911 cri.go:89] found id: ""
	I0903 23:43:47.511882  171911 logs.go:282] 0 containers: []
	W0903 23:43:47.511889  171911 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:43:47.511896  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:43:47.511952  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:43:47.545975  171911 cri.go:89] found id: ""
	I0903 23:43:47.546011  171911 logs.go:282] 0 containers: []
	W0903 23:43:47.546022  171911 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:43:47.546032  171911 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:43:47.546091  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:43:47.581967  171911 cri.go:89] found id: ""
	I0903 23:43:47.581996  171911 logs.go:282] 0 containers: []
	W0903 23:43:47.582004  171911 logs.go:284] No container was found matching "kindnet"
	I0903 23:43:47.582010  171911 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0903 23:43:47.582079  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0903 23:43:47.617442  171911 cri.go:89] found id: ""
	I0903 23:43:47.617470  171911 logs.go:282] 0 containers: []
	W0903 23:43:47.617478  171911 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0903 23:43:47.617487  171911 logs.go:123] Gathering logs for container status ...
	I0903 23:43:47.617499  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0903 23:43:47.655119  171911 logs.go:123] Gathering logs for kubelet ...
	I0903 23:43:47.655150  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0903 23:43:47.702001  171911 logs.go:123] Gathering logs for dmesg ...
	I0903 23:43:47.702035  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0903 23:43:47.715671  171911 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:43:47.715701  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:43:47.781271  171911 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0903 23:43:47.781297  171911 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:43:47.781310  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0903 23:43:50.353562  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:43:50.370200  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:43:50.370271  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:43:50.404593  171911 cri.go:89] found id: ""
	I0903 23:43:50.404621  171911 logs.go:282] 0 containers: []
	W0903 23:43:50.404631  171911 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:43:50.404640  171911 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:43:50.404714  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:43:50.438454  171911 cri.go:89] found id: ""
	I0903 23:43:50.438482  171911 logs.go:282] 0 containers: []
	W0903 23:43:50.438491  171911 logs.go:284] No container was found matching "etcd"
	I0903 23:43:50.438498  171911 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:43:50.438609  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:43:50.474138  171911 cri.go:89] found id: ""
	I0903 23:43:50.474165  171911 logs.go:282] 0 containers: []
	W0903 23:43:50.474176  171911 logs.go:284] No container was found matching "coredns"
	I0903 23:43:50.474184  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:43:50.474247  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:43:50.506277  171911 cri.go:89] found id: ""
	I0903 23:43:50.506308  171911 logs.go:282] 0 containers: []
	W0903 23:43:50.506319  171911 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:43:50.506328  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:43:50.506398  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:43:50.540877  171911 cri.go:89] found id: ""
	I0903 23:43:50.540905  171911 logs.go:282] 0 containers: []
	W0903 23:43:50.540912  171911 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:43:50.540918  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:43:50.540969  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:43:50.574490  171911 cri.go:89] found id: ""
	I0903 23:43:50.574548  171911 logs.go:282] 0 containers: []
	W0903 23:43:50.574567  171911 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:43:50.574578  171911 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:43:50.574654  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:43:50.608197  171911 cri.go:89] found id: ""
	I0903 23:43:50.608225  171911 logs.go:282] 0 containers: []
	W0903 23:43:50.608233  171911 logs.go:284] No container was found matching "kindnet"
	I0903 23:43:50.608238  171911 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0903 23:43:50.608288  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0903 23:43:50.641053  171911 cri.go:89] found id: ""
	I0903 23:43:50.641082  171911 logs.go:282] 0 containers: []
	W0903 23:43:50.641089  171911 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0903 23:43:50.641099  171911 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:43:50.641109  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0903 23:43:50.712696  171911 logs.go:123] Gathering logs for container status ...
	I0903 23:43:50.712742  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0903 23:43:50.749969  171911 logs.go:123] Gathering logs for kubelet ...
	I0903 23:43:50.750001  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0903 23:43:50.800039  171911 logs.go:123] Gathering logs for dmesg ...
	I0903 23:43:50.800074  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0903 23:43:50.813705  171911 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:43:50.813736  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:43:50.876873  171911 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
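
When every component lookup comes back empty, the collector falls back to gathering raw host logs: kubelet and CRI-O via journalctl, kernel warnings via a filtered dmesg, and overall container status via crictl. A small Go sketch that runs the same four commands from the log locally and summarizes what each returned (assumes a systemd host with these units present):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// gather runs one diagnostic command from the log through bash and
	// prints a one-line summary, like the "Gathering logs for ..." steps.
	func gather(name, script string) {
		out, err := exec.Command("/bin/bash", "-c", script).CombinedOutput()
		fmt.Printf("%s: %d bytes, err=%v\n", name, len(out), err)
	}

	func main() {
		gather("kubelet", "sudo journalctl -u kubelet -n 400")
		gather("dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400")
		gather("CRI-O", "sudo journalctl -u crio -n 400")
		gather("container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a")
	}
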
	I0903 23:43:53.378585  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:43:53.395927  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:43:53.395997  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:43:53.429784  171911 cri.go:89] found id: ""
	I0903 23:43:53.429814  171911 logs.go:282] 0 containers: []
	W0903 23:43:53.429821  171911 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:43:53.429827  171911 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:43:53.429880  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:43:53.463718  171911 cri.go:89] found id: ""
	I0903 23:43:53.463745  171911 logs.go:282] 0 containers: []
	W0903 23:43:53.463753  171911 logs.go:284] No container was found matching "etcd"
	I0903 23:43:53.463759  171911 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:43:53.463815  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:43:53.499017  171911 cri.go:89] found id: ""
	I0903 23:43:53.499046  171911 logs.go:282] 0 containers: []
	W0903 23:43:53.499056  171911 logs.go:284] No container was found matching "coredns"
	I0903 23:43:53.499065  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:43:53.499132  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:43:53.534239  171911 cri.go:89] found id: ""
	I0903 23:43:53.534273  171911 logs.go:282] 0 containers: []
	W0903 23:43:53.534283  171911 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:43:53.534290  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:43:53.534353  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:43:53.567405  171911 cri.go:89] found id: ""
	I0903 23:43:53.567431  171911 logs.go:282] 0 containers: []
	W0903 23:43:53.567438  171911 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:43:53.567445  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:43:53.567500  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:43:53.603686  171911 cri.go:89] found id: ""
	I0903 23:43:53.603722  171911 logs.go:282] 0 containers: []
	W0903 23:43:53.603733  171911 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:43:53.603742  171911 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:43:53.603805  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:43:53.638591  171911 cri.go:89] found id: ""
	I0903 23:43:53.638618  171911 logs.go:282] 0 containers: []
	W0903 23:43:53.638627  171911 logs.go:284] No container was found matching "kindnet"
	I0903 23:43:53.638635  171911 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0903 23:43:53.638698  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0903 23:43:53.672243  171911 cri.go:89] found id: ""
	I0903 23:43:53.672288  171911 logs.go:282] 0 containers: []
	W0903 23:43:53.672296  171911 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0903 23:43:53.672305  171911 logs.go:123] Gathering logs for kubelet ...
	I0903 23:43:53.672318  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0903 23:43:53.721410  171911 logs.go:123] Gathering logs for dmesg ...
	I0903 23:43:53.721448  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0903 23:43:53.735356  171911 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:43:53.735386  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:43:53.797966  171911 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0903 23:43:53.797988  171911 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:43:53.798005  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0903 23:43:53.872491  171911 logs.go:123] Gathering logs for container status ...
	I0903 23:43:53.872529  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0903 23:43:56.410853  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:43:56.427796  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:43:56.427871  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:43:56.460023  171911 cri.go:89] found id: ""
	I0903 23:43:56.460066  171911 logs.go:282] 0 containers: []
	W0903 23:43:56.460077  171911 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:43:56.460085  171911 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:43:56.460160  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:43:56.494386  171911 cri.go:89] found id: ""
	I0903 23:43:56.494414  171911 logs.go:282] 0 containers: []
	W0903 23:43:56.494424  171911 logs.go:284] No container was found matching "etcd"
	I0903 23:43:56.494432  171911 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:43:56.494492  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:43:56.529298  171911 cri.go:89] found id: ""
	I0903 23:43:56.529329  171911 logs.go:282] 0 containers: []
	W0903 23:43:56.529339  171911 logs.go:284] No container was found matching "coredns"
	I0903 23:43:56.529356  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:43:56.529433  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:43:56.562775  171911 cri.go:89] found id: ""
	I0903 23:43:56.562818  171911 logs.go:282] 0 containers: []
	W0903 23:43:56.562830  171911 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:43:56.562837  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:43:56.562898  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:43:56.604698  171911 cri.go:89] found id: ""
	I0903 23:43:56.604739  171911 logs.go:282] 0 containers: []
	W0903 23:43:56.604751  171911 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:43:56.604758  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:43:56.604811  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:43:56.644278  171911 cri.go:89] found id: ""
	I0903 23:43:56.644307  171911 logs.go:282] 0 containers: []
	W0903 23:43:56.644319  171911 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:43:56.644328  171911 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:43:56.644397  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:43:56.686334  171911 cri.go:89] found id: ""
	I0903 23:43:56.686366  171911 logs.go:282] 0 containers: []
	W0903 23:43:56.686377  171911 logs.go:284] No container was found matching "kindnet"
	I0903 23:43:56.686385  171911 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0903 23:43:56.686458  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0903 23:43:56.725441  171911 cri.go:89] found id: ""
	I0903 23:43:56.725466  171911 logs.go:282] 0 containers: []
	W0903 23:43:56.725486  171911 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0903 23:43:56.725494  171911 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:43:56.725508  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:43:56.791969  171911 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0903 23:43:56.792002  171911 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:43:56.792021  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0903 23:43:56.866297  171911 logs.go:123] Gathering logs for container status ...
	I0903 23:43:56.866338  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0903 23:43:56.904335  171911 logs.go:123] Gathering logs for kubelet ...
	I0903 23:43:56.904372  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0903 23:43:56.952822  171911 logs.go:123] Gathering logs for dmesg ...
	I0903 23:43:56.952863  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
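
The container-status step is the only one with a built-in fallback: if the backtick substitution `which crictl || echo crictl` finds no crictl, it degrades to the bare word crictl (so the failure stays attributable), and if the whole crictl invocation fails, "|| sudo docker ps -a" retries against the Docker runtime. The same prefer-crictl-else-docker choice, sketched in Go (a hypothetical helper, not minikube's code):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// containerStatus prefers crictl and falls back to docker, like the
	// shell one-liner in the log: crictl ps -a || sudo docker ps -a.
	func containerStatus() ([]byte, error) {
		if out, err := exec.Command("sudo", "crictl", "ps", "-a").Output(); err == nil {
			return out, nil
		}
		return exec.Command("sudo", "docker", "ps", "-a").Output()
	}

	func main() {
		out, err := containerStatus()
		if err != nil {
			fmt.Println("no usable container runtime CLI:", err)
			return
		}
		fmt.Print(string(out))
	}
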
	I0903 23:43:59.466793  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:43:59.484556  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:43:59.484633  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:43:59.521818  171911 cri.go:89] found id: ""
	I0903 23:43:59.521848  171911 logs.go:282] 0 containers: []
	W0903 23:43:59.521860  171911 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:43:59.521868  171911 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:43:59.521945  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:43:59.556474  171911 cri.go:89] found id: ""
	I0903 23:43:59.556501  171911 logs.go:282] 0 containers: []
	W0903 23:43:59.556509  171911 logs.go:284] No container was found matching "etcd"
	I0903 23:43:59.556515  171911 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:43:59.556569  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:43:59.591410  171911 cri.go:89] found id: ""
	I0903 23:43:59.591440  171911 logs.go:282] 0 containers: []
	W0903 23:43:59.591447  171911 logs.go:284] No container was found matching "coredns"
	I0903 23:43:59.591453  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:43:59.591503  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:43:59.625559  171911 cri.go:89] found id: ""
	I0903 23:43:59.625587  171911 logs.go:282] 0 containers: []
	W0903 23:43:59.625593  171911 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:43:59.625615  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:43:59.625668  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:43:59.659603  171911 cri.go:89] found id: ""
	I0903 23:43:59.659635  171911 logs.go:282] 0 containers: []
	W0903 23:43:59.659643  171911 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:43:59.659655  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:43:59.659713  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:43:59.700514  171911 cri.go:89] found id: ""
	I0903 23:43:59.700553  171911 logs.go:282] 0 containers: []
	W0903 23:43:59.700566  171911 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:43:59.700576  171911 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:43:59.700669  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:43:59.734778  171911 cri.go:89] found id: ""
	I0903 23:43:59.734805  171911 logs.go:282] 0 containers: []
	W0903 23:43:59.734816  171911 logs.go:284] No container was found matching "kindnet"
	I0903 23:43:59.734824  171911 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0903 23:43:59.734884  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0903 23:43:59.769663  171911 cri.go:89] found id: ""
	I0903 23:43:59.769703  171911 logs.go:282] 0 containers: []
	W0903 23:43:59.769714  171911 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0903 23:43:59.769727  171911 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:43:59.769743  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:43:59.832033  171911 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0903 23:43:59.832056  171911 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:43:59.832075  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0903 23:43:59.905304  171911 logs.go:123] Gathering logs for container status ...
	I0903 23:43:59.905348  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0903 23:43:59.942790  171911 logs.go:123] Gathering logs for kubelet ...
	I0903 23:43:59.942823  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0903 23:43:59.992617  171911 logs.go:123] Gathering logs for dmesg ...
	I0903 23:43:59.992660  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0903 23:44:02.508378  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:44:02.525572  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:44:02.525652  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:44:02.561330  171911 cri.go:89] found id: ""
	I0903 23:44:02.561361  171911 logs.go:282] 0 containers: []
	W0903 23:44:02.561369  171911 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:44:02.561375  171911 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:44:02.561461  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:44:02.595933  171911 cri.go:89] found id: ""
	I0903 23:44:02.595962  171911 logs.go:282] 0 containers: []
	W0903 23:44:02.595970  171911 logs.go:284] No container was found matching "etcd"
	I0903 23:44:02.595975  171911 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:44:02.596041  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:44:02.628817  171911 cri.go:89] found id: ""
	I0903 23:44:02.628854  171911 logs.go:282] 0 containers: []
	W0903 23:44:02.628865  171911 logs.go:284] No container was found matching "coredns"
	I0903 23:44:02.628873  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:44:02.628944  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:44:02.665027  171911 cri.go:89] found id: ""
	I0903 23:44:02.665060  171911 logs.go:282] 0 containers: []
	W0903 23:44:02.665072  171911 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:44:02.665079  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:44:02.665143  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:44:02.698721  171911 cri.go:89] found id: ""
	I0903 23:44:02.698752  171911 logs.go:282] 0 containers: []
	W0903 23:44:02.698761  171911 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:44:02.698768  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:44:02.698822  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:44:02.736138  171911 cri.go:89] found id: ""
	I0903 23:44:02.736170  171911 logs.go:282] 0 containers: []
	W0903 23:44:02.736180  171911 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:44:02.736188  171911 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:44:02.736254  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:44:02.770089  171911 cri.go:89] found id: ""
	I0903 23:44:02.770120  171911 logs.go:282] 0 containers: []
	W0903 23:44:02.770127  171911 logs.go:284] No container was found matching "kindnet"
	I0903 23:44:02.770134  171911 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0903 23:44:02.770201  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0903 23:44:02.805595  171911 cri.go:89] found id: ""
	I0903 23:44:02.805627  171911 logs.go:282] 0 containers: []
	W0903 23:44:02.805638  171911 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0903 23:44:02.805650  171911 logs.go:123] Gathering logs for kubelet ...
	I0903 23:44:02.805666  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0903 23:44:02.855714  171911 logs.go:123] Gathering logs for dmesg ...
	I0903 23:44:02.855753  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0903 23:44:02.870817  171911 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:44:02.870854  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:44:02.935987  171911 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0903 23:44:02.936011  171911 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:44:02.936025  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0903 23:44:03.013471  171911 logs.go:123] Gathering logs for container status ...
	I0903 23:44:03.013513  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0903 23:44:05.553522  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:44:05.570805  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:44:05.570869  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:44:05.606023  171911 cri.go:89] found id: ""
	I0903 23:44:05.606061  171911 logs.go:282] 0 containers: []
	W0903 23:44:05.606075  171911 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:44:05.606084  171911 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:44:05.606151  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:44:05.640331  171911 cri.go:89] found id: ""
	I0903 23:44:05.640362  171911 logs.go:282] 0 containers: []
	W0903 23:44:05.640374  171911 logs.go:284] No container was found matching "etcd"
	I0903 23:44:05.640380  171911 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:44:05.640455  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:44:05.675579  171911 cri.go:89] found id: ""
	I0903 23:44:05.675613  171911 logs.go:282] 0 containers: []
	W0903 23:44:05.675626  171911 logs.go:284] No container was found matching "coredns"
	I0903 23:44:05.675634  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:44:05.675698  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:44:05.710190  171911 cri.go:89] found id: ""
	I0903 23:44:05.710219  171911 logs.go:282] 0 containers: []
	W0903 23:44:05.710226  171911 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:44:05.710233  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:44:05.710292  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:44:05.745803  171911 cri.go:89] found id: ""
	I0903 23:44:05.745834  171911 logs.go:282] 0 containers: []
	W0903 23:44:05.745843  171911 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:44:05.745850  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:44:05.745908  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:44:05.780095  171911 cri.go:89] found id: ""
	I0903 23:44:05.780126  171911 logs.go:282] 0 containers: []
	W0903 23:44:05.780134  171911 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:44:05.780141  171911 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:44:05.780193  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:44:05.812816  171911 cri.go:89] found id: ""
	I0903 23:44:05.812849  171911 logs.go:282] 0 containers: []
	W0903 23:44:05.812862  171911 logs.go:284] No container was found matching "kindnet"
	I0903 23:44:05.812870  171911 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0903 23:44:05.812944  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0903 23:44:05.845992  171911 cri.go:89] found id: ""
	I0903 23:44:05.846024  171911 logs.go:282] 0 containers: []
	W0903 23:44:05.846032  171911 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0903 23:44:05.846041  171911 logs.go:123] Gathering logs for kubelet ...
	I0903 23:44:05.846053  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0903 23:44:05.896122  171911 logs.go:123] Gathering logs for dmesg ...
	I0903 23:44:05.896163  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0903 23:44:05.910777  171911 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:44:05.910815  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:44:05.973743  171911 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0903 23:44:05.973771  171911 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:44:05.973784  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0903 23:44:06.047880  171911 logs.go:123] Gathering logs for container status ...
	I0903 23:44:06.047924  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0903 23:44:08.588751  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:44:08.605926  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:44:08.605989  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:44:08.639229  171911 cri.go:89] found id: ""
	I0903 23:44:08.639260  171911 logs.go:282] 0 containers: []
	W0903 23:44:08.639268  171911 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:44:08.639275  171911 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:44:08.639332  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:44:08.673218  171911 cri.go:89] found id: ""
	I0903 23:44:08.673263  171911 logs.go:282] 0 containers: []
	W0903 23:44:08.673274  171911 logs.go:284] No container was found matching "etcd"
	I0903 23:44:08.673283  171911 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:44:08.673353  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:44:08.708635  171911 cri.go:89] found id: ""
	I0903 23:44:08.708665  171911 logs.go:282] 0 containers: []
	W0903 23:44:08.708676  171911 logs.go:284] No container was found matching "coredns"
	I0903 23:44:08.708685  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:44:08.708755  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:44:08.744277  171911 cri.go:89] found id: ""
	I0903 23:44:08.744304  171911 logs.go:282] 0 containers: []
	W0903 23:44:08.744311  171911 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:44:08.744318  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:44:08.744385  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:44:08.778421  171911 cri.go:89] found id: ""
	I0903 23:44:08.778451  171911 logs.go:282] 0 containers: []
	W0903 23:44:08.778469  171911 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:44:08.778477  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:44:08.778541  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:44:08.815240  171911 cri.go:89] found id: ""
	I0903 23:44:08.815277  171911 logs.go:282] 0 containers: []
	W0903 23:44:08.815290  171911 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:44:08.815298  171911 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:44:08.815371  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:44:08.849900  171911 cri.go:89] found id: ""
	I0903 23:44:08.849929  171911 logs.go:282] 0 containers: []
	W0903 23:44:08.849936  171911 logs.go:284] No container was found matching "kindnet"
	I0903 23:44:08.849942  171911 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0903 23:44:08.849993  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0903 23:44:08.885596  171911 cri.go:89] found id: ""
	I0903 23:44:08.885631  171911 logs.go:282] 0 containers: []
	W0903 23:44:08.885641  171911 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0903 23:44:08.885651  171911 logs.go:123] Gathering logs for container status ...
	I0903 23:44:08.885668  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0903 23:44:08.924882  171911 logs.go:123] Gathering logs for kubelet ...
	I0903 23:44:08.924909  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0903 23:44:08.976269  171911 logs.go:123] Gathering logs for dmesg ...
	I0903 23:44:08.976304  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0903 23:44:08.993447  171911 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:44:08.993483  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:44:09.069817  171911 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0903 23:44:09.069845  171911 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:44:09.069862  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0903 23:44:11.651779  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:44:11.668352  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:44:11.668423  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:44:11.703206  171911 cri.go:89] found id: ""
	I0903 23:44:11.703243  171911 logs.go:282] 0 containers: []
	W0903 23:44:11.703255  171911 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:44:11.703264  171911 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:44:11.703357  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:44:11.737323  171911 cri.go:89] found id: ""
	I0903 23:44:11.737367  171911 logs.go:282] 0 containers: []
	W0903 23:44:11.737380  171911 logs.go:284] No container was found matching "etcd"
	I0903 23:44:11.737402  171911 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:44:11.737479  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:44:11.771970  171911 cri.go:89] found id: ""
	I0903 23:44:11.772010  171911 logs.go:282] 0 containers: []
	W0903 23:44:11.772021  171911 logs.go:284] No container was found matching "coredns"
	I0903 23:44:11.772030  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:44:11.772104  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:44:11.806342  171911 cri.go:89] found id: ""
	I0903 23:44:11.806386  171911 logs.go:282] 0 containers: []
	W0903 23:44:11.806397  171911 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:44:11.806406  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:44:11.806483  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:44:11.843136  171911 cri.go:89] found id: ""
	I0903 23:44:11.843170  171911 logs.go:282] 0 containers: []
	W0903 23:44:11.843181  171911 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:44:11.843189  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:44:11.843259  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:44:11.877246  171911 cri.go:89] found id: ""
	I0903 23:44:11.877285  171911 logs.go:282] 0 containers: []
	W0903 23:44:11.877296  171911 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:44:11.877306  171911 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:44:11.877379  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:44:11.915257  171911 cri.go:89] found id: ""
	I0903 23:44:11.915295  171911 logs.go:282] 0 containers: []
	W0903 23:44:11.915308  171911 logs.go:284] No container was found matching "kindnet"
	I0903 23:44:11.915317  171911 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0903 23:44:11.915396  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0903 23:44:11.949271  171911 cri.go:89] found id: ""
	I0903 23:44:11.949300  171911 logs.go:282] 0 containers: []
	W0903 23:44:11.949310  171911 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0903 23:44:11.949323  171911 logs.go:123] Gathering logs for dmesg ...
	I0903 23:44:11.949342  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0903 23:44:11.962921  171911 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:44:11.962954  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:44:12.025549  171911 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0903 23:44:12.025580  171911 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:44:12.025596  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0903 23:44:12.099077  171911 logs.go:123] Gathering logs for container status ...
	I0903 23:44:12.099120  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0903 23:44:12.136408  171911 logs.go:123] Gathering logs for kubelet ...
	I0903 23:44:12.136446  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
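
The cycle above is the pattern for the rest of this section: minikube polls for a kube-apiserver process with "sudo pgrep -xnf kube-apiserver.*minikube.*", asks the CRI for containers matching each control-plane component with "sudo crictl ps -a --quiet --name=<component>", and, finding none, gathers kubelet, dmesg, describe-nodes, CRI-O, and container-status diagnostics before retrying roughly every three seconds. Below is a minimal Go sketch of that probe loop, reconstructed from the commands visible in the log; it is an illustration, not minikube's actual cri.go/logs.go code, and it assumes crictl is installed on the guest.

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// Components checked in the log above, in the same order.
var components = []string{
	"kube-apiserver", "etcd", "coredns", "kube-scheduler",
	"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
}

// containerIDs mirrors the logged command:
//   sudo crictl ps -a --quiet --name=<name>
// crictl prints one container ID per line; an empty result corresponds
// to the `found id: ""` lines in the log. Illustrative sketch only.
func containerIDs(name string) []string {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil
	}
	return strings.Fields(string(out))
}

func main() {
	for attempt := 1; ; attempt++ {
		running := 0
		for _, c := range components {
			ids := containerIDs(c)
			if len(ids) == 0 {
				fmt.Printf("attempt %d: no container found matching %q\n", attempt, c)
				continue
			}
			running += len(ids)
		}
		if running > 0 {
			return // control plane is coming up
		}
		time.Sleep(3 * time.Second) // matches the ~3s cadence of the timestamps above
	}
}
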
	I0903 23:44:14.686632  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:44:14.704032  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:44:14.704101  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:44:14.739046  171911 cri.go:89] found id: ""
	I0903 23:44:14.739076  171911 logs.go:282] 0 containers: []
	W0903 23:44:14.739084  171911 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:44:14.739091  171911 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:44:14.739156  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:44:14.775028  171911 cri.go:89] found id: ""
	I0903 23:44:14.775066  171911 logs.go:282] 0 containers: []
	W0903 23:44:14.775078  171911 logs.go:284] No container was found matching "etcd"
	I0903 23:44:14.775087  171911 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:44:14.775150  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:44:14.808896  171911 cri.go:89] found id: ""
	I0903 23:44:14.808928  171911 logs.go:282] 0 containers: []
	W0903 23:44:14.808939  171911 logs.go:284] No container was found matching "coredns"
	I0903 23:44:14.808947  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:44:14.809014  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:44:14.844967  171911 cri.go:89] found id: ""
	I0903 23:44:14.844998  171911 logs.go:282] 0 containers: []
	W0903 23:44:14.845010  171911 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:44:14.845018  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:44:14.845087  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:44:14.878706  171911 cri.go:89] found id: ""
	I0903 23:44:14.878734  171911 logs.go:282] 0 containers: []
	W0903 23:44:14.878742  171911 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:44:14.878750  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:44:14.878824  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:44:14.914368  171911 cri.go:89] found id: ""
	I0903 23:44:14.914407  171911 logs.go:282] 0 containers: []
	W0903 23:44:14.914420  171911 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:44:14.914429  171911 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:44:14.914523  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:44:14.949846  171911 cri.go:89] found id: ""
	I0903 23:44:14.949873  171911 logs.go:282] 0 containers: []
	W0903 23:44:14.949881  171911 logs.go:284] No container was found matching "kindnet"
	I0903 23:44:14.949888  171911 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0903 23:44:14.949956  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0903 23:44:14.985479  171911 cri.go:89] found id: ""
	I0903 23:44:14.985511  171911 logs.go:282] 0 containers: []
	W0903 23:44:14.985522  171911 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0903 23:44:14.985534  171911 logs.go:123] Gathering logs for kubelet ...
	I0903 23:44:14.985550  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0903 23:44:15.036097  171911 logs.go:123] Gathering logs for dmesg ...
	I0903 23:44:15.036141  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0903 23:44:15.050336  171911 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:44:15.050365  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:44:15.116416  171911 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0903 23:44:15.116439  171911 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:44:15.116457  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0903 23:44:15.193453  171911 logs.go:123] Gathering logs for container status ...
	I0903 23:44:15.193498  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0903 23:44:17.731284  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:44:17.748791  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:44:17.748854  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:44:17.784857  171911 cri.go:89] found id: ""
	I0903 23:44:17.784884  171911 logs.go:282] 0 containers: []
	W0903 23:44:17.784892  171911 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:44:17.784897  171911 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:44:17.784953  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:44:17.819838  171911 cri.go:89] found id: ""
	I0903 23:44:17.819867  171911 logs.go:282] 0 containers: []
	W0903 23:44:17.819875  171911 logs.go:284] No container was found matching "etcd"
	I0903 23:44:17.819881  171911 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:44:17.819932  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:44:17.853453  171911 cri.go:89] found id: ""
	I0903 23:44:17.853482  171911 logs.go:282] 0 containers: []
	W0903 23:44:17.853489  171911 logs.go:284] No container was found matching "coredns"
	I0903 23:44:17.853496  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:44:17.853553  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:44:17.887886  171911 cri.go:89] found id: ""
	I0903 23:44:17.887915  171911 logs.go:282] 0 containers: []
	W0903 23:44:17.887923  171911 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:44:17.887930  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:44:17.887985  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:44:17.923140  171911 cri.go:89] found id: ""
	I0903 23:44:17.923172  171911 logs.go:282] 0 containers: []
	W0903 23:44:17.923183  171911 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:44:17.923190  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:44:17.923258  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:44:17.957595  171911 cri.go:89] found id: ""
	I0903 23:44:17.957625  171911 logs.go:282] 0 containers: []
	W0903 23:44:17.957638  171911 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:44:17.957647  171911 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:44:17.957717  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:44:17.990247  171911 cri.go:89] found id: ""
	I0903 23:44:17.990276  171911 logs.go:282] 0 containers: []
	W0903 23:44:17.990284  171911 logs.go:284] No container was found matching "kindnet"
	I0903 23:44:17.990290  171911 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0903 23:44:17.990362  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0903 23:44:18.024643  171911 cri.go:89] found id: ""
	I0903 23:44:18.024673  171911 logs.go:282] 0 containers: []
	W0903 23:44:18.024685  171911 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0903 23:44:18.024697  171911 logs.go:123] Gathering logs for kubelet ...
	I0903 23:44:18.024713  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0903 23:44:18.076397  171911 logs.go:123] Gathering logs for dmesg ...
	I0903 23:44:18.076436  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0903 23:44:18.090204  171911 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:44:18.090233  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:44:18.163020  171911 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0903 23:44:18.163044  171911 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:44:18.163059  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0903 23:44:18.240276  171911 logs.go:123] Gathering logs for container status ...
	I0903 23:44:18.240314  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0903 23:44:20.781710  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:44:20.798871  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:44:20.798939  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:44:20.833834  171911 cri.go:89] found id: ""
	I0903 23:44:20.833867  171911 logs.go:282] 0 containers: []
	W0903 23:44:20.833875  171911 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:44:20.833881  171911 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:44:20.833936  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:44:20.868536  171911 cri.go:89] found id: ""
	I0903 23:44:20.868569  171911 logs.go:282] 0 containers: []
	W0903 23:44:20.868577  171911 logs.go:284] No container was found matching "etcd"
	I0903 23:44:20.868583  171911 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:44:20.868639  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:44:20.902513  171911 cri.go:89] found id: ""
	I0903 23:44:20.902546  171911 logs.go:282] 0 containers: []
	W0903 23:44:20.902557  171911 logs.go:284] No container was found matching "coredns"
	I0903 23:44:20.902570  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:44:20.902644  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:44:20.935967  171911 cri.go:89] found id: ""
	I0903 23:44:20.935994  171911 logs.go:282] 0 containers: []
	W0903 23:44:20.936001  171911 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:44:20.936007  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:44:20.936070  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:44:20.969967  171911 cri.go:89] found id: ""
	I0903 23:44:20.969995  171911 logs.go:282] 0 containers: []
	W0903 23:44:20.970003  171911 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:44:20.970009  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:44:20.970067  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:44:21.005097  171911 cri.go:89] found id: ""
	I0903 23:44:21.005130  171911 logs.go:282] 0 containers: []
	W0903 23:44:21.005149  171911 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:44:21.005158  171911 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:44:21.005231  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:44:21.040315  171911 cri.go:89] found id: ""
	I0903 23:44:21.040350  171911 logs.go:282] 0 containers: []
	W0903 23:44:21.040357  171911 logs.go:284] No container was found matching "kindnet"
	I0903 23:44:21.040364  171911 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0903 23:44:21.040431  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0903 23:44:21.075411  171911 cri.go:89] found id: ""
	I0903 23:44:21.075447  171911 logs.go:282] 0 containers: []
	W0903 23:44:21.075456  171911 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0903 23:44:21.075466  171911 logs.go:123] Gathering logs for kubelet ...
	I0903 23:44:21.075478  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0903 23:44:21.125281  171911 logs.go:123] Gathering logs for dmesg ...
	I0903 23:44:21.125322  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0903 23:44:21.139605  171911 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:44:21.139635  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:44:21.203960  171911 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0903 23:44:21.203986  171911 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:44:21.204004  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0903 23:44:21.278167  171911 logs.go:123] Gathering logs for container status ...
	I0903 23:44:21.278211  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0903 23:44:23.820132  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:44:23.839119  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:44:23.839184  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:44:23.883827  171911 cri.go:89] found id: ""
	I0903 23:44:23.883864  171911 logs.go:282] 0 containers: []
	W0903 23:44:23.883876  171911 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:44:23.883884  171911 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:44:23.883943  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:44:23.929729  171911 cri.go:89] found id: ""
	I0903 23:44:23.929756  171911 logs.go:282] 0 containers: []
	W0903 23:44:23.929765  171911 logs.go:284] No container was found matching "etcd"
	I0903 23:44:23.929771  171911 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:44:23.929822  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:44:23.962676  171911 cri.go:89] found id: ""
	I0903 23:44:23.962708  171911 logs.go:282] 0 containers: []
	W0903 23:44:23.962716  171911 logs.go:284] No container was found matching "coredns"
	I0903 23:44:23.962722  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:44:23.962778  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:44:23.995464  171911 cri.go:89] found id: ""
	I0903 23:44:23.995505  171911 logs.go:282] 0 containers: []
	W0903 23:44:23.995516  171911 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:44:23.995522  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:44:23.995586  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:44:24.030690  171911 cri.go:89] found id: ""
	I0903 23:44:24.030718  171911 logs.go:282] 0 containers: []
	W0903 23:44:24.030726  171911 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:44:24.030733  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:44:24.030791  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:44:24.064311  171911 cri.go:89] found id: ""
	I0903 23:44:24.064338  171911 logs.go:282] 0 containers: []
	W0903 23:44:24.064346  171911 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:44:24.064352  171911 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:44:24.064408  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:44:24.098888  171911 cri.go:89] found id: ""
	I0903 23:44:24.098917  171911 logs.go:282] 0 containers: []
	W0903 23:44:24.098924  171911 logs.go:284] No container was found matching "kindnet"
	I0903 23:44:24.098930  171911 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0903 23:44:24.098990  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0903 23:44:24.135030  171911 cri.go:89] found id: ""
	I0903 23:44:24.135057  171911 logs.go:282] 0 containers: []
	W0903 23:44:24.135064  171911 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0903 23:44:24.135074  171911 logs.go:123] Gathering logs for kubelet ...
	I0903 23:44:24.135086  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0903 23:44:24.185228  171911 logs.go:123] Gathering logs for dmesg ...
	I0903 23:44:24.185266  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0903 23:44:24.198908  171911 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:44:24.198937  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:44:24.260291  171911 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0903 23:44:24.260337  171911 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:44:24.260355  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0903 23:44:24.337581  171911 logs.go:123] Gathering logs for container status ...
	I0903 23:44:24.337620  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0903 23:44:26.876959  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:44:26.893615  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:44:26.893679  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:44:26.926745  171911 cri.go:89] found id: ""
	I0903 23:44:26.926776  171911 logs.go:282] 0 containers: []
	W0903 23:44:26.926784  171911 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:44:26.926791  171911 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:44:26.926848  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:44:26.959697  171911 cri.go:89] found id: ""
	I0903 23:44:26.959727  171911 logs.go:282] 0 containers: []
	W0903 23:44:26.959735  171911 logs.go:284] No container was found matching "etcd"
	I0903 23:44:26.959742  171911 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:44:26.959795  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:44:26.991963  171911 cri.go:89] found id: ""
	I0903 23:44:26.991996  171911 logs.go:282] 0 containers: []
	W0903 23:44:26.992004  171911 logs.go:284] No container was found matching "coredns"
	I0903 23:44:26.992011  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:44:26.992064  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:44:27.025939  171911 cri.go:89] found id: ""
	I0903 23:44:27.025978  171911 logs.go:282] 0 containers: []
	W0903 23:44:27.025989  171911 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:44:27.025997  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:44:27.026065  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:44:27.058572  171911 cri.go:89] found id: ""
	I0903 23:44:27.058598  171911 logs.go:282] 0 containers: []
	W0903 23:44:27.058606  171911 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:44:27.058612  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:44:27.058666  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:44:27.092277  171911 cri.go:89] found id: ""
	I0903 23:44:27.092309  171911 logs.go:282] 0 containers: []
	W0903 23:44:27.092318  171911 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:44:27.092324  171911 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:44:27.092385  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:44:27.127742  171911 cri.go:89] found id: ""
	I0903 23:44:27.127777  171911 logs.go:282] 0 containers: []
	W0903 23:44:27.127789  171911 logs.go:284] No container was found matching "kindnet"
	I0903 23:44:27.127798  171911 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0903 23:44:27.127872  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0903 23:44:27.162425  171911 cri.go:89] found id: ""
	I0903 23:44:27.162463  171911 logs.go:282] 0 containers: []
	W0903 23:44:27.162474  171911 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0903 23:44:27.162487  171911 logs.go:123] Gathering logs for kubelet ...
	I0903 23:44:27.162503  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0903 23:44:27.213126  171911 logs.go:123] Gathering logs for dmesg ...
	I0903 23:44:27.213165  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0903 23:44:27.226983  171911 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:44:27.227013  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:44:27.293122  171911 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0903 23:44:27.293152  171911 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:44:27.293169  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0903 23:44:27.368497  171911 logs.go:123] Gathering logs for container status ...
	I0903 23:44:27.368538  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0903 23:44:29.907183  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:44:29.924079  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:44:29.924172  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:44:29.957813  171911 cri.go:89] found id: ""
	I0903 23:44:29.957843  171911 logs.go:282] 0 containers: []
	W0903 23:44:29.957851  171911 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:44:29.957857  171911 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:44:29.957919  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:44:29.992782  171911 cri.go:89] found id: ""
	I0903 23:44:29.992812  171911 logs.go:282] 0 containers: []
	W0903 23:44:29.992819  171911 logs.go:284] No container was found matching "etcd"
	I0903 23:44:29.992826  171911 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:44:29.992888  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:44:30.026629  171911 cri.go:89] found id: ""
	I0903 23:44:30.026664  171911 logs.go:282] 0 containers: []
	W0903 23:44:30.026674  171911 logs.go:284] No container was found matching "coredns"
	I0903 23:44:30.026682  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:44:30.026756  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:44:30.060035  171911 cri.go:89] found id: ""
	I0903 23:44:30.060074  171911 logs.go:282] 0 containers: []
	W0903 23:44:30.060083  171911 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:44:30.060092  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:44:30.060154  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:44:30.101281  171911 cri.go:89] found id: ""
	I0903 23:44:30.101319  171911 logs.go:282] 0 containers: []
	W0903 23:44:30.101330  171911 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:44:30.101338  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:44:30.101419  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:44:30.146884  171911 cri.go:89] found id: ""
	I0903 23:44:30.146911  171911 logs.go:282] 0 containers: []
	W0903 23:44:30.146918  171911 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:44:30.146925  171911 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:44:30.146989  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:44:30.180988  171911 cri.go:89] found id: ""
	I0903 23:44:30.181016  171911 logs.go:282] 0 containers: []
	W0903 23:44:30.181024  171911 logs.go:284] No container was found matching "kindnet"
	I0903 23:44:30.181030  171911 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0903 23:44:30.181103  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0903 23:44:30.214648  171911 cri.go:89] found id: ""
	I0903 23:44:30.214679  171911 logs.go:282] 0 containers: []
	W0903 23:44:30.214687  171911 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0903 23:44:30.214696  171911 logs.go:123] Gathering logs for kubelet ...
	I0903 23:44:30.214709  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0903 23:44:30.262757  171911 logs.go:123] Gathering logs for dmesg ...
	I0903 23:44:30.262799  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0903 23:44:30.283299  171911 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:44:30.283331  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:44:30.366919  171911 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0903 23:44:30.366945  171911 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:44:30.366959  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0903 23:44:30.442612  171911 logs.go:123] Gathering logs for container status ...
	I0903 23:44:30.442654  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0903 23:44:32.981733  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:44:32.999850  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:44:32.999930  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:44:33.040618  171911 cri.go:89] found id: ""
	I0903 23:44:33.040653  171911 logs.go:282] 0 containers: []
	W0903 23:44:33.040664  171911 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:44:33.040671  171911 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:44:33.040738  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:44:33.081786  171911 cri.go:89] found id: ""
	I0903 23:44:33.081818  171911 logs.go:282] 0 containers: []
	W0903 23:44:33.081829  171911 logs.go:284] No container was found matching "etcd"
	I0903 23:44:33.081836  171911 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:44:33.081906  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:44:33.125847  171911 cri.go:89] found id: ""
	I0903 23:44:33.125878  171911 logs.go:282] 0 containers: []
	W0903 23:44:33.125888  171911 logs.go:284] No container was found matching "coredns"
	I0903 23:44:33.125896  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:44:33.125962  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:44:33.167437  171911 cri.go:89] found id: ""
	I0903 23:44:33.167465  171911 logs.go:282] 0 containers: []
	W0903 23:44:33.167473  171911 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:44:33.167481  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:44:33.167557  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:44:33.208145  171911 cri.go:89] found id: ""
	I0903 23:44:33.208177  171911 logs.go:282] 0 containers: []
	W0903 23:44:33.208185  171911 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:44:33.208192  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:44:33.208248  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:44:33.250045  171911 cri.go:89] found id: ""
	I0903 23:44:33.250074  171911 logs.go:282] 0 containers: []
	W0903 23:44:33.250081  171911 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:44:33.250087  171911 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:44:33.250139  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:44:33.289576  171911 cri.go:89] found id: ""
	I0903 23:44:33.289607  171911 logs.go:282] 0 containers: []
	W0903 23:44:33.289615  171911 logs.go:284] No container was found matching "kindnet"
	I0903 23:44:33.289621  171911 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0903 23:44:33.289676  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0903 23:44:33.325452  171911 cri.go:89] found id: ""
	I0903 23:44:33.325485  171911 logs.go:282] 0 containers: []
	W0903 23:44:33.325493  171911 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0903 23:44:33.325503  171911 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:44:33.325515  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0903 23:44:33.403967  171911 logs.go:123] Gathering logs for container status ...
	I0903 23:44:33.404018  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0903 23:44:33.441581  171911 logs.go:123] Gathering logs for kubelet ...
	I0903 23:44:33.441619  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0903 23:44:33.488744  171911 logs.go:123] Gathering logs for dmesg ...
	I0903 23:44:33.488794  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0903 23:44:33.502603  171911 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:44:33.502648  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:44:33.567447  171911 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
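
Each "describe nodes" gather fails identically: the bundled kubectl reads /var/lib/minikube/kubeconfig, whose server endpoint is localhost:8443, and with no kube-apiserver container running nothing listens on that port, so the TCP connection is refused. The tiny probe below (illustrative only, not part of the test suite) reproduces that failure mode from inside the guest.

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// With no kube-apiserver container running, this dial fails with
	// "connection refused", which kubectl surfaces as the error above.
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("apiserver port is open")
}
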
	I0903 23:44:36.069781  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:44:36.093945  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:44:36.094023  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:44:36.138900  171911 cri.go:89] found id: ""
	I0903 23:44:36.138929  171911 logs.go:282] 0 containers: []
	W0903 23:44:36.138940  171911 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:44:36.138950  171911 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:44:36.139016  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:44:36.174814  171911 cri.go:89] found id: ""
	I0903 23:44:36.174841  171911 logs.go:282] 0 containers: []
	W0903 23:44:36.174849  171911 logs.go:284] No container was found matching "etcd"
	I0903 23:44:36.174855  171911 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:44:36.174918  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:44:36.211574  171911 cri.go:89] found id: ""
	I0903 23:44:36.211604  171911 logs.go:282] 0 containers: []
	W0903 23:44:36.211611  171911 logs.go:284] No container was found matching "coredns"
	I0903 23:44:36.211618  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:44:36.211670  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:44:36.245780  171911 cri.go:89] found id: ""
	I0903 23:44:36.245812  171911 logs.go:282] 0 containers: []
	W0903 23:44:36.245823  171911 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:44:36.245830  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:44:36.245886  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:44:36.280576  171911 cri.go:89] found id: ""
	I0903 23:44:36.280606  171911 logs.go:282] 0 containers: []
	W0903 23:44:36.280614  171911 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:44:36.280620  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:44:36.280674  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:44:36.315469  171911 cri.go:89] found id: ""
	I0903 23:44:36.315504  171911 logs.go:282] 0 containers: []
	W0903 23:44:36.315515  171911 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:44:36.315524  171911 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:44:36.315582  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:44:36.349983  171911 cri.go:89] found id: ""
	I0903 23:44:36.350018  171911 logs.go:282] 0 containers: []
	W0903 23:44:36.350027  171911 logs.go:284] No container was found matching "kindnet"
	I0903 23:44:36.350033  171911 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0903 23:44:36.350083  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0903 23:44:36.384827  171911 cri.go:89] found id: ""
	I0903 23:44:36.384857  171911 logs.go:282] 0 containers: []
	W0903 23:44:36.384866  171911 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0903 23:44:36.384877  171911 logs.go:123] Gathering logs for dmesg ...
	I0903 23:44:36.384896  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0903 23:44:36.398999  171911 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:44:36.399029  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:44:36.467458  171911 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0903 23:44:36.467492  171911 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:44:36.467507  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0903 23:44:36.546881  171911 logs.go:123] Gathering logs for container status ...
	I0903 23:44:36.546922  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0903 23:44:36.584400  171911 logs.go:123] Gathering logs for kubelet ...
	I0903 23:44:36.584437  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0903 23:44:39.135283  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:44:39.152700  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:44:39.152762  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:44:39.187286  171911 cri.go:89] found id: ""
	I0903 23:44:39.187333  171911 logs.go:282] 0 containers: []
	W0903 23:44:39.187344  171911 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:44:39.187351  171911 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:44:39.187418  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:44:39.222904  171911 cri.go:89] found id: ""
	I0903 23:44:39.222932  171911 logs.go:282] 0 containers: []
	W0903 23:44:39.222940  171911 logs.go:284] No container was found matching "etcd"
	I0903 23:44:39.222946  171911 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:44:39.223001  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:44:39.256820  171911 cri.go:89] found id: ""
	I0903 23:44:39.256849  171911 logs.go:282] 0 containers: []
	W0903 23:44:39.256860  171911 logs.go:284] No container was found matching "coredns"
	I0903 23:44:39.256867  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:44:39.256936  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:44:39.290701  171911 cri.go:89] found id: ""
	I0903 23:44:39.290732  171911 logs.go:282] 0 containers: []
	W0903 23:44:39.290742  171911 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:44:39.290748  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:44:39.290814  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:44:39.325458  171911 cri.go:89] found id: ""
	I0903 23:44:39.325494  171911 logs.go:282] 0 containers: []
	W0903 23:44:39.325505  171911 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:44:39.325513  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:44:39.325577  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:44:39.358959  171911 cri.go:89] found id: ""
	I0903 23:44:39.358988  171911 logs.go:282] 0 containers: []
	W0903 23:44:39.358996  171911 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:44:39.359002  171911 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:44:39.359070  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:44:39.394031  171911 cri.go:89] found id: ""
	I0903 23:44:39.394058  171911 logs.go:282] 0 containers: []
	W0903 23:44:39.394066  171911 logs.go:284] No container was found matching "kindnet"
	I0903 23:44:39.394072  171911 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0903 23:44:39.394135  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0903 23:44:39.428921  171911 cri.go:89] found id: ""
	I0903 23:44:39.428950  171911 logs.go:282] 0 containers: []
	W0903 23:44:39.428961  171911 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0903 23:44:39.428973  171911 logs.go:123] Gathering logs for kubelet ...
	I0903 23:44:39.428992  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0903 23:44:39.478303  171911 logs.go:123] Gathering logs for dmesg ...
	I0903 23:44:39.478346  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0903 23:44:39.492136  171911 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:44:39.492165  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:44:39.556474  171911 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0903 23:44:39.556499  171911 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:44:39.556512  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0903 23:44:39.630384  171911 logs.go:123] Gathering logs for container status ...
	I0903 23:44:39.630421  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0903 23:44:42.169783  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:44:42.186331  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:44:42.186392  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:44:42.220630  171911 cri.go:89] found id: ""
	I0903 23:44:42.220658  171911 logs.go:282] 0 containers: []
	W0903 23:44:42.220669  171911 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:44:42.220678  171911 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:44:42.220751  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:44:42.256274  171911 cri.go:89] found id: ""
	I0903 23:44:42.256310  171911 logs.go:282] 0 containers: []
	W0903 23:44:42.256321  171911 logs.go:284] No container was found matching "etcd"
	I0903 23:44:42.256329  171911 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:44:42.256387  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:44:42.289958  171911 cri.go:89] found id: ""
	I0903 23:44:42.289988  171911 logs.go:282] 0 containers: []
	W0903 23:44:42.289998  171911 logs.go:284] No container was found matching "coredns"
	I0903 23:44:42.290006  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:44:42.290065  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:44:42.322425  171911 cri.go:89] found id: ""
	I0903 23:44:42.322453  171911 logs.go:282] 0 containers: []
	W0903 23:44:42.322464  171911 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:44:42.322473  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:44:42.322537  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:44:42.357459  171911 cri.go:89] found id: ""
	I0903 23:44:42.357494  171911 logs.go:282] 0 containers: []
	W0903 23:44:42.357503  171911 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:44:42.357509  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:44:42.357588  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:44:42.390807  171911 cri.go:89] found id: ""
	I0903 23:44:42.390837  171911 logs.go:282] 0 containers: []
	W0903 23:44:42.390845  171911 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:44:42.390851  171911 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:44:42.390924  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:44:42.424548  171911 cri.go:89] found id: ""
	I0903 23:44:42.424579  171911 logs.go:282] 0 containers: []
	W0903 23:44:42.424590  171911 logs.go:284] No container was found matching "kindnet"
	I0903 23:44:42.424598  171911 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0903 23:44:42.424667  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0903 23:44:42.459215  171911 cri.go:89] found id: ""
	I0903 23:44:42.459250  171911 logs.go:282] 0 containers: []
	W0903 23:44:42.459261  171911 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0903 23:44:42.459274  171911 logs.go:123] Gathering logs for kubelet ...
	I0903 23:44:42.459290  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0903 23:44:42.505525  171911 logs.go:123] Gathering logs for dmesg ...
	I0903 23:44:42.505560  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0903 23:44:42.519712  171911 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:44:42.519744  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:44:42.583576  171911 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0903 23:44:42.583603  171911 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:44:42.583618  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0903 23:44:42.660899  171911 logs.go:123] Gathering logs for container status ...
	I0903 23:44:42.660936  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0903 23:44:45.200707  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:44:45.217299  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:44:45.217372  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:44:45.252045  171911 cri.go:89] found id: ""
	I0903 23:44:45.252073  171911 logs.go:282] 0 containers: []
	W0903 23:44:45.252081  171911 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:44:45.252087  171911 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:44:45.252155  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:44:45.287247  171911 cri.go:89] found id: ""
	I0903 23:44:45.287281  171911 logs.go:282] 0 containers: []
	W0903 23:44:45.287289  171911 logs.go:284] No container was found matching "etcd"
	I0903 23:44:45.287296  171911 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:44:45.287353  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:44:45.320423  171911 cri.go:89] found id: ""
	I0903 23:44:45.320450  171911 logs.go:282] 0 containers: []
	W0903 23:44:45.320457  171911 logs.go:284] No container was found matching "coredns"
	I0903 23:44:45.320463  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:44:45.320517  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:44:45.353147  171911 cri.go:89] found id: ""
	I0903 23:44:45.353179  171911 logs.go:282] 0 containers: []
	W0903 23:44:45.353187  171911 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:44:45.353193  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:44:45.353261  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:44:45.387052  171911 cri.go:89] found id: ""
	I0903 23:44:45.387080  171911 logs.go:282] 0 containers: []
	W0903 23:44:45.387089  171911 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:44:45.387096  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:44:45.387151  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:44:45.422621  171911 cri.go:89] found id: ""
	I0903 23:44:45.422651  171911 logs.go:282] 0 containers: []
	W0903 23:44:45.422659  171911 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:44:45.422666  171911 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:44:45.422734  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:44:45.457224  171911 cri.go:89] found id: ""
	I0903 23:44:45.457258  171911 logs.go:282] 0 containers: []
	W0903 23:44:45.457266  171911 logs.go:284] No container was found matching "kindnet"
	I0903 23:44:45.457274  171911 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0903 23:44:45.457339  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0903 23:44:45.490659  171911 cri.go:89] found id: ""
	I0903 23:44:45.490685  171911 logs.go:282] 0 containers: []
	W0903 23:44:45.490693  171911 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0903 23:44:45.490706  171911 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:44:45.490729  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:44:45.556871  171911 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0903 23:44:45.556894  171911 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:44:45.556909  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0903 23:44:45.628062  171911 logs.go:123] Gathering logs for container status ...
	I0903 23:44:45.628101  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0903 23:44:45.666937  171911 logs.go:123] Gathering logs for kubelet ...
	I0903 23:44:45.666977  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0903 23:44:45.713545  171911 logs.go:123] Gathering logs for dmesg ...
	I0903 23:44:45.713580  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
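The block above is minikube's control-plane probe: for each expected component it asks the CRI runtime, via crictl over SSH, whether any container in any state matches the component name. Every query returns an empty ID list, so no control-plane container was ever created on this node. A minimal sketch of the same per-component check (a re-creation for illustration, not minikube's own code), assuming only that crictl is reachable on the node:

	# Each empty result corresponds to one 'No container was found matching' line above.
	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	            kube-controller-manager kindnet kubernetes-dashboard; do
	  # -a lists containers in all states; --quiet prints IDs only
	  ids="$(sudo crictl ps -a --quiet --name="$name")"
	  [ -z "$ids" ] && echo "No container was found matching \"$name\""
	done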
	I0903 23:44:48.227552  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:44:48.245044  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:44:48.245118  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:44:48.279490  171911 cri.go:89] found id: ""
	I0903 23:44:48.279519  171911 logs.go:282] 0 containers: []
	W0903 23:44:48.279529  171911 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:44:48.279537  171911 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:44:48.279621  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:44:48.313971  171911 cri.go:89] found id: ""
	I0903 23:44:48.313998  171911 logs.go:282] 0 containers: []
	W0903 23:44:48.314006  171911 logs.go:284] No container was found matching "etcd"
	I0903 23:44:48.314012  171911 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:44:48.314076  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:44:48.349729  171911 cri.go:89] found id: ""
	I0903 23:44:48.349765  171911 logs.go:282] 0 containers: []
	W0903 23:44:48.349773  171911 logs.go:284] No container was found matching "coredns"
	I0903 23:44:48.349779  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:44:48.349843  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:44:48.384104  171911 cri.go:89] found id: ""
	I0903 23:44:48.384132  171911 logs.go:282] 0 containers: []
	W0903 23:44:48.384140  171911 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:44:48.384147  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:44:48.384210  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:44:48.418534  171911 cri.go:89] found id: ""
	I0903 23:44:48.418569  171911 logs.go:282] 0 containers: []
	W0903 23:44:48.418581  171911 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:44:48.418589  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:44:48.418656  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:44:48.452604  171911 cri.go:89] found id: ""
	I0903 23:44:48.452632  171911 logs.go:282] 0 containers: []
	W0903 23:44:48.452640  171911 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:44:48.452647  171911 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:44:48.452711  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:44:48.485587  171911 cri.go:89] found id: ""
	I0903 23:44:48.485618  171911 logs.go:282] 0 containers: []
	W0903 23:44:48.485629  171911 logs.go:284] No container was found matching "kindnet"
	I0903 23:44:48.485636  171911 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0903 23:44:48.485701  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0903 23:44:48.518840  171911 cri.go:89] found id: ""
	I0903 23:44:48.518865  171911 logs.go:282] 0 containers: []
	W0903 23:44:48.518876  171911 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0903 23:44:48.518890  171911 logs.go:123] Gathering logs for kubelet ...
	I0903 23:44:48.518906  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0903 23:44:48.566332  171911 logs.go:123] Gathering logs for dmesg ...
	I0903 23:44:48.566368  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0903 23:44:48.580074  171911 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:44:48.580103  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:44:48.646139  171911 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0903 23:44:48.646163  171911 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:44:48.646177  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0903 23:44:48.721508  171911 logs.go:123] Gathering logs for container status ...
	I0903 23:44:48.721551  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
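The "container status" step is built as a fallback chain: prefer crictl if it resolves on PATH, keep the bare name so the command still parses if it does not, and fall back to docker only when the crictl listing itself fails. Unpacked for readability (a sketch of the shell semantics, not new behavior):

	# Literal command from the log:
	#   sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
	# `which crictl` prints the full path when crictl is installed;
	# otherwise `echo crictl` substitutes the bare name. If that
	# listing exits non-zero, `||` falls back to the docker listing.
	sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a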
	I0903 23:44:51.261729  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:44:51.277615  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:44:51.277688  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:44:51.311728  171911 cri.go:89] found id: ""
	I0903 23:44:51.311758  171911 logs.go:282] 0 containers: []
	W0903 23:44:51.311767  171911 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:44:51.311773  171911 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:44:51.311841  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:44:51.346364  171911 cri.go:89] found id: ""
	I0903 23:44:51.346394  171911 logs.go:282] 0 containers: []
	W0903 23:44:51.346402  171911 logs.go:284] No container was found matching "etcd"
	I0903 23:44:51.346408  171911 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:44:51.346467  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:44:51.380196  171911 cri.go:89] found id: ""
	I0903 23:44:51.380233  171911 logs.go:282] 0 containers: []
	W0903 23:44:51.380249  171911 logs.go:284] No container was found matching "coredns"
	I0903 23:44:51.380259  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:44:51.380331  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:44:51.414829  171911 cri.go:89] found id: ""
	I0903 23:44:51.414861  171911 logs.go:282] 0 containers: []
	W0903 23:44:51.414869  171911 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:44:51.414875  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:44:51.414943  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:44:51.448741  171911 cri.go:89] found id: ""
	I0903 23:44:51.448779  171911 logs.go:282] 0 containers: []
	W0903 23:44:51.448792  171911 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:44:51.448801  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:44:51.448865  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:44:51.484499  171911 cri.go:89] found id: ""
	I0903 23:44:51.484537  171911 logs.go:282] 0 containers: []
	W0903 23:44:51.484545  171911 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:44:51.484552  171911 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:44:51.484605  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:44:51.518538  171911 cri.go:89] found id: ""
	I0903 23:44:51.518568  171911 logs.go:282] 0 containers: []
	W0903 23:44:51.518580  171911 logs.go:284] No container was found matching "kindnet"
	I0903 23:44:51.518589  171911 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0903 23:44:51.518649  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0903 23:44:51.560124  171911 cri.go:89] found id: ""
	I0903 23:44:51.560158  171911 logs.go:282] 0 containers: []
	W0903 23:44:51.560168  171911 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0903 23:44:51.560193  171911 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:44:51.560207  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0903 23:44:51.636716  171911 logs.go:123] Gathering logs for container status ...
	I0903 23:44:51.636760  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0903 23:44:51.674322  171911 logs.go:123] Gathering logs for kubelet ...
	I0903 23:44:51.674355  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0903 23:44:51.723819  171911 logs.go:123] Gathering logs for dmesg ...
	I0903 23:44:51.723856  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0903 23:44:51.737446  171911 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:44:51.737478  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:44:51.800575  171911 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
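Every "describe nodes" attempt fails the same way, and the two failures are linked: the bundled kubectl (pinned to the cluster's v1.20.0 binaries) dials the apiserver at localhost:8443, and since the crictl probes found no kube-apiserver container, nothing is listening there and the TCP connection is refused. The failing command, reproduced from the log:

	# Runs on the node over SSH; exits 1 while nothing listens on localhost:8443.
	sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes \
	  --kubeconfig=/var/lib/minikube/kubeconfig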
	I0903 23:44:54.300746  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:44:54.317060  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:44:54.317135  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:44:54.350356  171911 cri.go:89] found id: ""
	I0903 23:44:54.350382  171911 logs.go:282] 0 containers: []
	W0903 23:44:54.350389  171911 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:44:54.350396  171911 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:44:54.350458  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:44:54.386548  171911 cri.go:89] found id: ""
	I0903 23:44:54.386577  171911 logs.go:282] 0 containers: []
	W0903 23:44:54.386586  171911 logs.go:284] No container was found matching "etcd"
	I0903 23:44:54.386593  171911 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:44:54.386647  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:44:54.423360  171911 cri.go:89] found id: ""
	I0903 23:44:54.423388  171911 logs.go:282] 0 containers: []
	W0903 23:44:54.423395  171911 logs.go:284] No container was found matching "coredns"
	I0903 23:44:54.423407  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:44:54.423458  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:44:54.458673  171911 cri.go:89] found id: ""
	I0903 23:44:54.458701  171911 logs.go:282] 0 containers: []
	W0903 23:44:54.458709  171911 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:44:54.458716  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:44:54.458781  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:44:54.491692  171911 cri.go:89] found id: ""
	I0903 23:44:54.491726  171911 logs.go:282] 0 containers: []
	W0903 23:44:54.491738  171911 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:44:54.491746  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:44:54.491809  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:44:54.524500  171911 cri.go:89] found id: ""
	I0903 23:44:54.524530  171911 logs.go:282] 0 containers: []
	W0903 23:44:54.524543  171911 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:44:54.524550  171911 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:44:54.524614  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:44:54.558644  171911 cri.go:89] found id: ""
	I0903 23:44:54.558676  171911 logs.go:282] 0 containers: []
	W0903 23:44:54.558688  171911 logs.go:284] No container was found matching "kindnet"
	I0903 23:44:54.558696  171911 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0903 23:44:54.558773  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0903 23:44:54.592814  171911 cri.go:89] found id: ""
	I0903 23:44:54.592841  171911 logs.go:282] 0 containers: []
	W0903 23:44:54.592851  171911 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0903 23:44:54.592863  171911 logs.go:123] Gathering logs for kubelet ...
	I0903 23:44:54.592879  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0903 23:44:54.642538  171911 logs.go:123] Gathering logs for dmesg ...
	I0903 23:44:54.642572  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0903 23:44:54.656435  171911 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:44:54.656468  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:44:54.721260  171911 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0903 23:44:54.721286  171911 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:44:54.721304  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0903 23:44:54.798283  171911 logs.go:123] Gathering logs for container status ...
	I0903 23:44:54.798323  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0903 23:44:57.337294  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:44:57.353760  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:44:57.353842  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:44:57.387108  171911 cri.go:89] found id: ""
	I0903 23:44:57.387136  171911 logs.go:282] 0 containers: []
	W0903 23:44:57.387146  171911 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:44:57.387153  171911 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:44:57.387219  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:44:57.421245  171911 cri.go:89] found id: ""
	I0903 23:44:57.421273  171911 logs.go:282] 0 containers: []
	W0903 23:44:57.421283  171911 logs.go:284] No container was found matching "etcd"
	I0903 23:44:57.421291  171911 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:44:57.421367  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:44:57.455403  171911 cri.go:89] found id: ""
	I0903 23:44:57.455431  171911 logs.go:282] 0 containers: []
	W0903 23:44:57.455441  171911 logs.go:284] No container was found matching "coredns"
	I0903 23:44:57.455450  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:44:57.455510  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:44:57.487825  171911 cri.go:89] found id: ""
	I0903 23:44:57.487860  171911 logs.go:282] 0 containers: []
	W0903 23:44:57.487871  171911 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:44:57.487880  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:44:57.487935  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:44:57.522048  171911 cri.go:89] found id: ""
	I0903 23:44:57.522073  171911 logs.go:282] 0 containers: []
	W0903 23:44:57.522081  171911 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:44:57.522087  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:44:57.522140  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:44:57.555520  171911 cri.go:89] found id: ""
	I0903 23:44:57.555545  171911 logs.go:282] 0 containers: []
	W0903 23:44:57.555553  171911 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:44:57.555560  171911 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:44:57.555622  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:44:57.588895  171911 cri.go:89] found id: ""
	I0903 23:44:57.588924  171911 logs.go:282] 0 containers: []
	W0903 23:44:57.588933  171911 logs.go:284] No container was found matching "kindnet"
	I0903 23:44:57.588941  171911 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0903 23:44:57.589002  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0903 23:44:57.623152  171911 cri.go:89] found id: ""
	I0903 23:44:57.623190  171911 logs.go:282] 0 containers: []
	W0903 23:44:57.623198  171911 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0903 23:44:57.623207  171911 logs.go:123] Gathering logs for kubelet ...
	I0903 23:44:57.623217  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0903 23:44:57.672898  171911 logs.go:123] Gathering logs for dmesg ...
	I0903 23:44:57.672938  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0903 23:44:57.686578  171911 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:44:57.686611  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:44:57.750436  171911 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0903 23:44:57.750467  171911 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:44:57.750485  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0903 23:44:57.830779  171911 logs.go:123] Gathering logs for container status ...
	I0903 23:44:57.830829  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0903 23:45:00.371014  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:45:00.387297  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:45:00.387414  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:45:00.420632  171911 cri.go:89] found id: ""
	I0903 23:45:00.420662  171911 logs.go:282] 0 containers: []
	W0903 23:45:00.420670  171911 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:45:00.420676  171911 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:45:00.420729  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:45:00.453824  171911 cri.go:89] found id: ""
	I0903 23:45:00.453852  171911 logs.go:282] 0 containers: []
	W0903 23:45:00.453860  171911 logs.go:284] No container was found matching "etcd"
	I0903 23:45:00.453866  171911 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:45:00.453917  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:45:00.488618  171911 cri.go:89] found id: ""
	I0903 23:45:00.488650  171911 logs.go:282] 0 containers: []
	W0903 23:45:00.488661  171911 logs.go:284] No container was found matching "coredns"
	I0903 23:45:00.488669  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:45:00.488738  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:45:00.522545  171911 cri.go:89] found id: ""
	I0903 23:45:00.522579  171911 logs.go:282] 0 containers: []
	W0903 23:45:00.522587  171911 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:45:00.522595  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:45:00.522655  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:45:00.555419  171911 cri.go:89] found id: ""
	I0903 23:45:00.555445  171911 logs.go:282] 0 containers: []
	W0903 23:45:00.555453  171911 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:45:00.555459  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:45:00.555515  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:45:00.588742  171911 cri.go:89] found id: ""
	I0903 23:45:00.588777  171911 logs.go:282] 0 containers: []
	W0903 23:45:00.588790  171911 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:45:00.588799  171911 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:45:00.588876  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:45:00.621164  171911 cri.go:89] found id: ""
	I0903 23:45:00.621194  171911 logs.go:282] 0 containers: []
	W0903 23:45:00.621205  171911 logs.go:284] No container was found matching "kindnet"
	I0903 23:45:00.621212  171911 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0903 23:45:00.621287  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0903 23:45:00.652140  171911 cri.go:89] found id: ""
	I0903 23:45:00.652167  171911 logs.go:282] 0 containers: []
	W0903 23:45:00.652178  171911 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0903 23:45:00.652191  171911 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:45:00.652206  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0903 23:45:00.733518  171911 logs.go:123] Gathering logs for container status ...
	I0903 23:45:00.733560  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0903 23:45:00.770455  171911 logs.go:123] Gathering logs for kubelet ...
	I0903 23:45:00.770489  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0903 23:45:00.819129  171911 logs.go:123] Gathering logs for dmesg ...
	I0903 23:45:00.819161  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0903 23:45:00.832460  171911 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:45:00.832492  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:45:00.895930  171911 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
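Each cycle opens with a pgrep probe for the apiserver process: -f matches against the full command line, -x requires the pattern to match that command line exactly, and -n keeps only the newest match. The probe succeeds only once a kube-apiserver process mentioning "minikube" exists, and the timestamps show the whole cycle repeating on roughly a three-second interval. A sketch of the implied wait loop (the 3s interval is taken from the timestamps; the overall deadline is an assumption, the log does not state one):

	deadline=$(( $(date +%s) + 300 ))   # assumed cap; not from the log
	until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
	  [ "$(date +%s)" -ge "$deadline" ] && { echo "apiserver never came up" >&2; exit 1; }
	  sleep 3
	done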
	I0903 23:45:03.397643  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:45:03.414370  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:45:03.414441  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:45:03.448753  171911 cri.go:89] found id: ""
	I0903 23:45:03.448787  171911 logs.go:282] 0 containers: []
	W0903 23:45:03.448795  171911 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:45:03.448802  171911 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:45:03.448860  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:45:03.484668  171911 cri.go:89] found id: ""
	I0903 23:45:03.484696  171911 logs.go:282] 0 containers: []
	W0903 23:45:03.484703  171911 logs.go:284] No container was found matching "etcd"
	I0903 23:45:03.484709  171911 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:45:03.484763  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:45:03.517157  171911 cri.go:89] found id: ""
	I0903 23:45:03.517184  171911 logs.go:282] 0 containers: []
	W0903 23:45:03.517191  171911 logs.go:284] No container was found matching "coredns"
	I0903 23:45:03.517197  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:45:03.517250  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:45:03.552220  171911 cri.go:89] found id: ""
	I0903 23:45:03.552246  171911 logs.go:282] 0 containers: []
	W0903 23:45:03.552255  171911 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:45:03.552262  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:45:03.552328  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:45:03.585731  171911 cri.go:89] found id: ""
	I0903 23:45:03.585764  171911 logs.go:282] 0 containers: []
	W0903 23:45:03.585774  171911 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:45:03.585783  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:45:03.585854  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:45:03.619396  171911 cri.go:89] found id: ""
	I0903 23:45:03.619425  171911 logs.go:282] 0 containers: []
	W0903 23:45:03.619433  171911 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:45:03.619439  171911 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:45:03.619503  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:45:03.653461  171911 cri.go:89] found id: ""
	I0903 23:45:03.653489  171911 logs.go:282] 0 containers: []
	W0903 23:45:03.653500  171911 logs.go:284] No container was found matching "kindnet"
	I0903 23:45:03.653509  171911 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0903 23:45:03.653562  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0903 23:45:03.690075  171911 cri.go:89] found id: ""
	I0903 23:45:03.690102  171911 logs.go:282] 0 containers: []
	W0903 23:45:03.690112  171911 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0903 23:45:03.690123  171911 logs.go:123] Gathering logs for kubelet ...
	I0903 23:45:03.690139  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0903 23:45:03.742271  171911 logs.go:123] Gathering logs for dmesg ...
	I0903 23:45:03.742305  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0903 23:45:03.755513  171911 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:45:03.755548  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:45:03.817702  171911 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0903 23:45:03.817734  171911 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:45:03.817758  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0903 23:45:03.894336  171911 logs.go:123] Gathering logs for container status ...
	I0903 23:45:03.894377  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0903 23:45:06.433897  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:45:06.450322  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:45:06.450386  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:45:06.482782  171911 cri.go:89] found id: ""
	I0903 23:45:06.482810  171911 logs.go:282] 0 containers: []
	W0903 23:45:06.482818  171911 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:45:06.482824  171911 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:45:06.482878  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:45:06.516065  171911 cri.go:89] found id: ""
	I0903 23:45:06.516098  171911 logs.go:282] 0 containers: []
	W0903 23:45:06.516106  171911 logs.go:284] No container was found matching "etcd"
	I0903 23:45:06.516112  171911 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:45:06.516164  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:45:06.548668  171911 cri.go:89] found id: ""
	I0903 23:45:06.548695  171911 logs.go:282] 0 containers: []
	W0903 23:45:06.548703  171911 logs.go:284] No container was found matching "coredns"
	I0903 23:45:06.548710  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:45:06.548765  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:45:06.580287  171911 cri.go:89] found id: ""
	I0903 23:45:06.580316  171911 logs.go:282] 0 containers: []
	W0903 23:45:06.580324  171911 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:45:06.580331  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:45:06.580385  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:45:06.613698  171911 cri.go:89] found id: ""
	I0903 23:45:06.613728  171911 logs.go:282] 0 containers: []
	W0903 23:45:06.613736  171911 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:45:06.613742  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:45:06.613798  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:45:06.648492  171911 cri.go:89] found id: ""
	I0903 23:45:06.648520  171911 logs.go:282] 0 containers: []
	W0903 23:45:06.648531  171911 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:45:06.648539  171911 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:45:06.648591  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:45:06.682079  171911 cri.go:89] found id: ""
	I0903 23:45:06.682105  171911 logs.go:282] 0 containers: []
	W0903 23:45:06.682114  171911 logs.go:284] No container was found matching "kindnet"
	I0903 23:45:06.682123  171911 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0903 23:45:06.682182  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0903 23:45:06.717523  171911 cri.go:89] found id: ""
	I0903 23:45:06.717551  171911 logs.go:282] 0 containers: []
	W0903 23:45:06.717559  171911 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0903 23:45:06.717568  171911 logs.go:123] Gathering logs for kubelet ...
	I0903 23:45:06.717580  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0903 23:45:06.766524  171911 logs.go:123] Gathering logs for dmesg ...
	I0903 23:45:06.766557  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0903 23:45:06.779931  171911 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:45:06.779960  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:45:06.843183  171911 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0903 23:45:06.843204  171911 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:45:06.843217  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0903 23:45:06.919233  171911 logs.go:123] Gathering logs for container status ...
	I0903 23:45:06.919270  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0903 23:45:09.456643  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:45:09.475777  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:45:09.475855  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:45:09.516030  171911 cri.go:89] found id: ""
	I0903 23:45:09.516066  171911 logs.go:282] 0 containers: []
	W0903 23:45:09.516078  171911 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:45:09.516086  171911 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:45:09.516155  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:45:09.556025  171911 cri.go:89] found id: ""
	I0903 23:45:09.556058  171911 logs.go:282] 0 containers: []
	W0903 23:45:09.556071  171911 logs.go:284] No container was found matching "etcd"
	I0903 23:45:09.556080  171911 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:45:09.556145  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:45:09.596343  171911 cri.go:89] found id: ""
	I0903 23:45:09.596375  171911 logs.go:282] 0 containers: []
	W0903 23:45:09.596384  171911 logs.go:284] No container was found matching "coredns"
	I0903 23:45:09.596393  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:45:09.596456  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:45:09.634286  171911 cri.go:89] found id: ""
	I0903 23:45:09.634323  171911 logs.go:282] 0 containers: []
	W0903 23:45:09.634330  171911 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:45:09.634336  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:45:09.634387  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:45:09.667579  171911 cri.go:89] found id: ""
	I0903 23:45:09.667617  171911 logs.go:282] 0 containers: []
	W0903 23:45:09.667629  171911 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:45:09.667637  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:45:09.667709  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:45:09.702631  171911 cri.go:89] found id: ""
	I0903 23:45:09.702661  171911 logs.go:282] 0 containers: []
	W0903 23:45:09.702670  171911 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:45:09.702677  171911 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:45:09.702744  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:45:09.736481  171911 cri.go:89] found id: ""
	I0903 23:45:09.736513  171911 logs.go:282] 0 containers: []
	W0903 23:45:09.736522  171911 logs.go:284] No container was found matching "kindnet"
	I0903 23:45:09.736528  171911 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0903 23:45:09.736594  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0903 23:45:09.768392  171911 cri.go:89] found id: ""
	I0903 23:45:09.768420  171911 logs.go:282] 0 containers: []
	W0903 23:45:09.768428  171911 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0903 23:45:09.768438  171911 logs.go:123] Gathering logs for container status ...
	I0903 23:45:09.768454  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0903 23:45:09.804233  171911 logs.go:123] Gathering logs for kubelet ...
	I0903 23:45:09.804262  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0903 23:45:09.854916  171911 logs.go:123] Gathering logs for dmesg ...
	I0903 23:45:09.854951  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0903 23:45:09.868290  171911 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:45:09.868326  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:45:09.937659  171911 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0903 23:45:09.937686  171911 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:45:09.937702  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0903 23:45:12.515352  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:45:12.532069  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:45:12.532138  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:45:12.566307  171911 cri.go:89] found id: ""
	I0903 23:45:12.566347  171911 logs.go:282] 0 containers: []
	W0903 23:45:12.566356  171911 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:45:12.566361  171911 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:45:12.566413  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:45:12.600883  171911 cri.go:89] found id: ""
	I0903 23:45:12.600911  171911 logs.go:282] 0 containers: []
	W0903 23:45:12.600919  171911 logs.go:284] No container was found matching "etcd"
	I0903 23:45:12.600925  171911 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:45:12.600976  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:45:12.634831  171911 cri.go:89] found id: ""
	I0903 23:45:12.634860  171911 logs.go:282] 0 containers: []
	W0903 23:45:12.634868  171911 logs.go:284] No container was found matching "coredns"
	I0903 23:45:12.634874  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:45:12.634932  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:45:12.668965  171911 cri.go:89] found id: ""
	I0903 23:45:12.668993  171911 logs.go:282] 0 containers: []
	W0903 23:45:12.669002  171911 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:45:12.669008  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:45:12.669061  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:45:12.702632  171911 cri.go:89] found id: ""
	I0903 23:45:12.702662  171911 logs.go:282] 0 containers: []
	W0903 23:45:12.702670  171911 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:45:12.702676  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:45:12.702734  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:45:12.736957  171911 cri.go:89] found id: ""
	I0903 23:45:12.736994  171911 logs.go:282] 0 containers: []
	W0903 23:45:12.737005  171911 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:45:12.737013  171911 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:45:12.737096  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:45:12.769324  171911 cri.go:89] found id: ""
	I0903 23:45:12.769353  171911 logs.go:282] 0 containers: []
	W0903 23:45:12.769361  171911 logs.go:284] No container was found matching "kindnet"
	I0903 23:45:12.769367  171911 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0903 23:45:12.769433  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0903 23:45:12.801706  171911 cri.go:89] found id: ""
	I0903 23:45:12.801731  171911 logs.go:282] 0 containers: []
	W0903 23:45:12.801738  171911 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0903 23:45:12.801747  171911 logs.go:123] Gathering logs for kubelet ...
	I0903 23:45:12.801758  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0903 23:45:12.850449  171911 logs.go:123] Gathering logs for dmesg ...
	I0903 23:45:12.850485  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0903 23:45:12.864235  171911 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:45:12.864263  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:45:12.928347  171911 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0903 23:45:12.928372  171911 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:45:12.928385  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0903 23:45:13.002530  171911 logs.go:123] Gathering logs for container status ...
	I0903 23:45:13.002569  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0903 23:45:15.541753  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:45:15.558031  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:45:15.558098  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:45:15.590544  171911 cri.go:89] found id: ""
	I0903 23:45:15.590590  171911 logs.go:282] 0 containers: []
	W0903 23:45:15.590608  171911 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:45:15.590618  171911 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:45:15.590681  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:45:15.623172  171911 cri.go:89] found id: ""
	I0903 23:45:15.623206  171911 logs.go:282] 0 containers: []
	W0903 23:45:15.623214  171911 logs.go:284] No container was found matching "etcd"
	I0903 23:45:15.623220  171911 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:45:15.623271  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:45:15.666374  171911 cri.go:89] found id: ""
	I0903 23:45:15.666413  171911 logs.go:282] 0 containers: []
	W0903 23:45:15.666424  171911 logs.go:284] No container was found matching "coredns"
	I0903 23:45:15.666432  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:45:15.666500  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:45:15.700153  171911 cri.go:89] found id: ""
	I0903 23:45:15.700188  171911 logs.go:282] 0 containers: []
	W0903 23:45:15.700196  171911 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:45:15.700203  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:45:15.700258  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:45:15.734346  171911 cri.go:89] found id: ""
	I0903 23:45:15.734379  171911 logs.go:282] 0 containers: []
	W0903 23:45:15.734391  171911 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:45:15.734401  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:45:15.734468  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:45:15.768125  171911 cri.go:89] found id: ""
	I0903 23:45:15.768151  171911 logs.go:282] 0 containers: []
	W0903 23:45:15.768160  171911 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:45:15.768166  171911 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:45:15.768219  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:45:15.802055  171911 cri.go:89] found id: ""
	I0903 23:45:15.802085  171911 logs.go:282] 0 containers: []
	W0903 23:45:15.802093  171911 logs.go:284] No container was found matching "kindnet"
	I0903 23:45:15.802101  171911 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0903 23:45:15.802155  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0903 23:45:15.835742  171911 cri.go:89] found id: ""
	I0903 23:45:15.835775  171911 logs.go:282] 0 containers: []
	W0903 23:45:15.835785  171911 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0903 23:45:15.835796  171911 logs.go:123] Gathering logs for kubelet ...
	I0903 23:45:15.835809  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0903 23:45:15.887302  171911 logs.go:123] Gathering logs for dmesg ...
	I0903 23:45:15.887339  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0903 23:45:15.900589  171911 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:45:15.900616  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:45:15.963821  171911 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0903 23:45:15.963850  171911 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:45:15.963867  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0903 23:45:16.041873  171911 logs.go:123] Gathering logs for container status ...
	I0903 23:45:16.041910  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
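The remaining log sources are gathered straight from systemd and the kernel ring buffer rather than from the (absent) cluster. Spelled out with the flags the log uses:

	# Last 400 journal lines for each relevant systemd unit:
	sudo journalctl -u kubelet -n 400
	sudo journalctl -u crio -n 400
	# Kernel messages: -P disables the pager, -H gives human-readable
	# timestamps, -L=never turns color off, and --level keeps only
	# warnings and worse before trimming to the last 400 lines.
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400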
	I0903 23:45:18.579975  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:45:18.596552  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:45:18.596644  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:45:18.637122  171911 cri.go:89] found id: ""
	I0903 23:45:18.637150  171911 logs.go:282] 0 containers: []
	W0903 23:45:18.637159  171911 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:45:18.637168  171911 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:45:18.637231  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:45:18.683926  171911 cri.go:89] found id: ""
	I0903 23:45:18.683965  171911 logs.go:282] 0 containers: []
	W0903 23:45:18.683976  171911 logs.go:284] No container was found matching "etcd"
	I0903 23:45:18.683984  171911 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:45:18.684143  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:45:18.724297  171911 cri.go:89] found id: ""
	I0903 23:45:18.724326  171911 logs.go:282] 0 containers: []
	W0903 23:45:18.724337  171911 logs.go:284] No container was found matching "coredns"
	I0903 23:45:18.724356  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:45:18.724424  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:45:18.767543  171911 cri.go:89] found id: ""
	I0903 23:45:18.767585  171911 logs.go:282] 0 containers: []
	W0903 23:45:18.767594  171911 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:45:18.767601  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:45:18.767666  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:45:18.808984  171911 cri.go:89] found id: ""
	I0903 23:45:18.809023  171911 logs.go:282] 0 containers: []
	W0903 23:45:18.809034  171911 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:45:18.809042  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:45:18.809125  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:45:18.843616  171911 cri.go:89] found id: ""
	I0903 23:45:18.843651  171911 logs.go:282] 0 containers: []
	W0903 23:45:18.843662  171911 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:45:18.843670  171911 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:45:18.843772  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:45:18.878089  171911 cri.go:89] found id: ""
	I0903 23:45:18.878117  171911 logs.go:282] 0 containers: []
	W0903 23:45:18.878125  171911 logs.go:284] No container was found matching "kindnet"
	I0903 23:45:18.878131  171911 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0903 23:45:18.878199  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0903 23:45:18.913557  171911 cri.go:89] found id: ""
	I0903 23:45:18.913590  171911 logs.go:282] 0 containers: []
	W0903 23:45:18.913602  171911 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0903 23:45:18.913613  171911 logs.go:123] Gathering logs for kubelet ...
	I0903 23:45:18.913629  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0903 23:45:18.964473  171911 logs.go:123] Gathering logs for dmesg ...
	I0903 23:45:18.964511  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0903 23:45:18.977841  171911 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:45:18.977868  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:45:19.041151  171911 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0903 23:45:19.041175  171911 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:45:19.041190  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0903 23:45:19.114112  171911 logs.go:123] Gathering logs for container status ...
	I0903 23:45:19.114166  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
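The probe pattern repeated throughout this log is worth spelling out: for each control-plane component, minikube runs crictl ps -a --quiet --name=<component> and treats empty output as "0 containers". A minimal Go sketch of that pattern, assuming crictl is installed and sudo is non-interactive; listContainerIDs is a hypothetical helper for illustration, not minikube's actual cri.go:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainerIDs returns the IDs crictl reports for a container name.
// An empty result corresponds to the logged "0 containers: []" case.
func listContainerIDs(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	var ids []string
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if line != "" {
			ids = append(ids, line)
		}
	}
	return ids, nil
}

func main() {
	for _, name := range []string{"kube-apiserver", "etcd", "coredns"} {
		ids, err := listContainerIDs(name)
		if err != nil {
			fmt.Println("probe failed:", err)
			continue
		}
		if len(ids) == 0 {
			fmt.Printf("no container was found matching %q\n", name)
		} else {
			fmt.Printf("%s: %v\n", name, ids)
		}
	}
}

Every probe in this run comes back empty, which is why each cycle falls through to the journalctl, dmesg, and describe-nodes gathering steps.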
	I0903 23:45:21.655099  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:45:21.671751  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:45:21.671826  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:45:21.705950  171911 cri.go:89] found id: ""
	I0903 23:45:21.705985  171911 logs.go:282] 0 containers: []
	W0903 23:45:21.705993  171911 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:45:21.706000  171911 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:45:21.706066  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:45:21.745098  171911 cri.go:89] found id: ""
	I0903 23:45:21.745125  171911 logs.go:282] 0 containers: []
	W0903 23:45:21.745134  171911 logs.go:284] No container was found matching "etcd"
	I0903 23:45:21.745139  171911 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:45:21.745212  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:45:21.787214  171911 cri.go:89] found id: ""
	I0903 23:45:21.787246  171911 logs.go:282] 0 containers: []
	W0903 23:45:21.787259  171911 logs.go:284] No container was found matching "coredns"
	I0903 23:45:21.787267  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:45:21.787340  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:45:21.825966  171911 cri.go:89] found id: ""
	I0903 23:45:21.825999  171911 logs.go:282] 0 containers: []
	W0903 23:45:21.826009  171911 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:45:21.826023  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:45:21.826094  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:45:21.858874  171911 cri.go:89] found id: ""
	I0903 23:45:21.858909  171911 logs.go:282] 0 containers: []
	W0903 23:45:21.858920  171911 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:45:21.858928  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:45:21.858990  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:45:21.892820  171911 cri.go:89] found id: ""
	I0903 23:45:21.892851  171911 logs.go:282] 0 containers: []
	W0903 23:45:21.892862  171911 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:45:21.892869  171911 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:45:21.892938  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:45:21.927139  171911 cri.go:89] found id: ""
	I0903 23:45:21.927167  171911 logs.go:282] 0 containers: []
	W0903 23:45:21.927174  171911 logs.go:284] No container was found matching "kindnet"
	I0903 23:45:21.927180  171911 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0903 23:45:21.927242  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0903 23:45:21.961202  171911 cri.go:89] found id: ""
	I0903 23:45:21.961235  171911 logs.go:282] 0 containers: []
	W0903 23:45:21.961247  171911 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0903 23:45:21.961259  171911 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:45:21.961274  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0903 23:45:22.034253  171911 logs.go:123] Gathering logs for container status ...
	I0903 23:45:22.034307  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0903 23:45:22.081973  171911 logs.go:123] Gathering logs for kubelet ...
	I0903 23:45:22.082014  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0903 23:45:22.136441  171911 logs.go:123] Gathering logs for dmesg ...
	I0903 23:45:22.136507  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0903 23:45:22.153988  171911 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:45:22.154027  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:45:22.218718  171911 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0903 23:45:24.718932  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:45:24.735304  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:45:24.735366  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:45:24.769484  171911 cri.go:89] found id: ""
	I0903 23:45:24.769526  171911 logs.go:282] 0 containers: []
	W0903 23:45:24.769534  171911 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:45:24.769541  171911 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:45:24.769602  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:45:24.804478  171911 cri.go:89] found id: ""
	I0903 23:45:24.804512  171911 logs.go:282] 0 containers: []
	W0903 23:45:24.804523  171911 logs.go:284] No container was found matching "etcd"
	I0903 23:45:24.804531  171911 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:45:24.804616  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:45:24.839941  171911 cri.go:89] found id: ""
	I0903 23:45:24.839967  171911 logs.go:282] 0 containers: []
	W0903 23:45:24.839974  171911 logs.go:284] No container was found matching "coredns"
	I0903 23:45:24.839980  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:45:24.840043  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:45:24.872589  171911 cri.go:89] found id: ""
	I0903 23:45:24.872631  171911 logs.go:282] 0 containers: []
	W0903 23:45:24.872641  171911 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:45:24.872650  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:45:24.872713  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:45:24.906281  171911 cri.go:89] found id: ""
	I0903 23:45:24.906312  171911 logs.go:282] 0 containers: []
	W0903 23:45:24.906321  171911 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:45:24.906327  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:45:24.906381  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:45:24.940855  171911 cri.go:89] found id: ""
	I0903 23:45:24.940891  171911 logs.go:282] 0 containers: []
	W0903 23:45:24.940902  171911 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:45:24.940910  171911 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:45:24.940979  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:45:24.973046  171911 cri.go:89] found id: ""
	I0903 23:45:24.973075  171911 logs.go:282] 0 containers: []
	W0903 23:45:24.973084  171911 logs.go:284] No container was found matching "kindnet"
	I0903 23:45:24.973091  171911 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0903 23:45:24.973160  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0903 23:45:25.006986  171911 cri.go:89] found id: ""
	I0903 23:45:25.007015  171911 logs.go:282] 0 containers: []
	W0903 23:45:25.007026  171911 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0903 23:45:25.007038  171911 logs.go:123] Gathering logs for kubelet ...
	I0903 23:45:25.007054  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0903 23:45:25.057037  171911 logs.go:123] Gathering logs for dmesg ...
	I0903 23:45:25.057075  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0903 23:45:25.070713  171911 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:45:25.070741  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:45:25.135104  171911 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0903 23:45:25.135129  171911 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:45:25.135142  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0903 23:45:25.211776  171911 logs.go:123] Gathering logs for container status ...
	I0903 23:45:25.211816  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0903 23:45:27.750263  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:45:27.766962  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:45:27.767039  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:45:27.809102  171911 cri.go:89] found id: ""
	I0903 23:45:27.809134  171911 logs.go:282] 0 containers: []
	W0903 23:45:27.809142  171911 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:45:27.809149  171911 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:45:27.809201  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:45:27.852918  171911 cri.go:89] found id: ""
	I0903 23:45:27.852946  171911 logs.go:282] 0 containers: []
	W0903 23:45:27.852954  171911 logs.go:284] No container was found matching "etcd"
	I0903 23:45:27.852961  171911 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:45:27.853025  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:45:27.908523  171911 cri.go:89] found id: ""
	I0903 23:45:27.908554  171911 logs.go:282] 0 containers: []
	W0903 23:45:27.908561  171911 logs.go:284] No container was found matching "coredns"
	I0903 23:45:27.908566  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:45:27.908627  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:45:27.941105  171911 cri.go:89] found id: ""
	I0903 23:45:27.941136  171911 logs.go:282] 0 containers: []
	W0903 23:45:27.941144  171911 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:45:27.941150  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:45:27.941204  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:45:27.974030  171911 cri.go:89] found id: ""
	I0903 23:45:27.974064  171911 logs.go:282] 0 containers: []
	W0903 23:45:27.974075  171911 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:45:27.974082  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:45:27.974149  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:45:28.007829  171911 cri.go:89] found id: ""
	I0903 23:45:28.007857  171911 logs.go:282] 0 containers: []
	W0903 23:45:28.007867  171911 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:45:28.007874  171911 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:45:28.007936  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:45:28.050575  171911 cri.go:89] found id: ""
	I0903 23:45:28.050614  171911 logs.go:282] 0 containers: []
	W0903 23:45:28.050622  171911 logs.go:284] No container was found matching "kindnet"
	I0903 23:45:28.050629  171911 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0903 23:45:28.050684  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0903 23:45:28.085777  171911 cri.go:89] found id: ""
	I0903 23:45:28.085809  171911 logs.go:282] 0 containers: []
	W0903 23:45:28.085817  171911 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0903 23:45:28.085826  171911 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:45:28.085838  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:45:28.150751  171911 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0903 23:45:28.150778  171911 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:45:28.150792  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0903 23:45:28.223955  171911 logs.go:123] Gathering logs for container status ...
	I0903 23:45:28.224000  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0903 23:45:28.262972  171911 logs.go:123] Gathering logs for kubelet ...
	I0903 23:45:28.262999  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0903 23:45:28.311545  171911 logs.go:123] Gathering logs for dmesg ...
	I0903 23:45:28.311580  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0903 23:45:30.827970  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:45:30.844742  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:45:30.844805  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:45:30.880412  171911 cri.go:89] found id: ""
	I0903 23:45:30.880453  171911 logs.go:282] 0 containers: []
	W0903 23:45:30.880468  171911 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:45:30.880476  171911 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:45:30.880549  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:45:30.913830  171911 cri.go:89] found id: ""
	I0903 23:45:30.913858  171911 logs.go:282] 0 containers: []
	W0903 23:45:30.913867  171911 logs.go:284] No container was found matching "etcd"
	I0903 23:45:30.913872  171911 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:45:30.913935  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:45:30.946611  171911 cri.go:89] found id: ""
	I0903 23:45:30.946641  171911 logs.go:282] 0 containers: []
	W0903 23:45:30.946650  171911 logs.go:284] No container was found matching "coredns"
	I0903 23:45:30.946656  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:45:30.946711  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:45:30.980152  171911 cri.go:89] found id: ""
	I0903 23:45:30.980183  171911 logs.go:282] 0 containers: []
	W0903 23:45:30.980193  171911 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:45:30.980201  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:45:30.980271  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:45:31.015814  171911 cri.go:89] found id: ""
	I0903 23:45:31.015845  171911 logs.go:282] 0 containers: []
	W0903 23:45:31.015856  171911 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:45:31.015863  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:45:31.015932  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:45:31.050513  171911 cri.go:89] found id: ""
	I0903 23:45:31.050543  171911 logs.go:282] 0 containers: []
	W0903 23:45:31.050555  171911 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:45:31.050562  171911 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:45:31.050636  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:45:31.083766  171911 cri.go:89] found id: ""
	I0903 23:45:31.083791  171911 logs.go:282] 0 containers: []
	W0903 23:45:31.083798  171911 logs.go:284] No container was found matching "kindnet"
	I0903 23:45:31.083805  171911 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0903 23:45:31.083864  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0903 23:45:31.117858  171911 cri.go:89] found id: ""
	I0903 23:45:31.117886  171911 logs.go:282] 0 containers: []
	W0903 23:45:31.117893  171911 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0903 23:45:31.117903  171911 logs.go:123] Gathering logs for dmesg ...
	I0903 23:45:31.117922  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0903 23:45:31.131404  171911 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:45:31.131433  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:45:31.195245  171911 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0903 23:45:31.195275  171911 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:45:31.195295  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0903 23:45:31.271630  171911 logs.go:123] Gathering logs for container status ...
	I0903 23:45:31.271671  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0903 23:45:31.310746  171911 logs.go:123] Gathering logs for kubelet ...
	I0903 23:45:31.310780  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0903 23:45:33.861848  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:45:33.878672  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:45:33.878742  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:45:33.911344  171911 cri.go:89] found id: ""
	I0903 23:45:33.911377  171911 logs.go:282] 0 containers: []
	W0903 23:45:33.911388  171911 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:45:33.911396  171911 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:45:33.911458  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:45:33.948348  171911 cri.go:89] found id: ""
	I0903 23:45:33.948377  171911 logs.go:282] 0 containers: []
	W0903 23:45:33.948385  171911 logs.go:284] No container was found matching "etcd"
	I0903 23:45:33.948391  171911 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:45:33.948455  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:45:33.981680  171911 cri.go:89] found id: ""
	I0903 23:45:33.981710  171911 logs.go:282] 0 containers: []
	W0903 23:45:33.981722  171911 logs.go:284] No container was found matching "coredns"
	I0903 23:45:33.981730  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:45:33.981796  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:45:34.013721  171911 cri.go:89] found id: ""
	I0903 23:45:34.013747  171911 logs.go:282] 0 containers: []
	W0903 23:45:34.013755  171911 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:45:34.013762  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:45:34.013827  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:45:34.047612  171911 cri.go:89] found id: ""
	I0903 23:45:34.047644  171911 logs.go:282] 0 containers: []
	W0903 23:45:34.047654  171911 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:45:34.047661  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:45:34.047720  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:45:34.081680  171911 cri.go:89] found id: ""
	I0903 23:45:34.081714  171911 logs.go:282] 0 containers: []
	W0903 23:45:34.081725  171911 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:45:34.081734  171911 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:45:34.081802  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:45:34.117208  171911 cri.go:89] found id: ""
	I0903 23:45:34.117247  171911 logs.go:282] 0 containers: []
	W0903 23:45:34.117258  171911 logs.go:284] No container was found matching "kindnet"
	I0903 23:45:34.117268  171911 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0903 23:45:34.117339  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0903 23:45:34.150598  171911 cri.go:89] found id: ""
	I0903 23:45:34.150626  171911 logs.go:282] 0 containers: []
	W0903 23:45:34.150634  171911 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0903 23:45:34.150644  171911 logs.go:123] Gathering logs for kubelet ...
	I0903 23:45:34.150655  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0903 23:45:34.199612  171911 logs.go:123] Gathering logs for dmesg ...
	I0903 23:45:34.199652  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0903 23:45:34.213484  171911 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:45:34.213513  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:45:34.276337  171911 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0903 23:45:34.276358  171911 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:45:34.276380  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0903 23:45:34.347780  171911 logs.go:123] Gathering logs for container status ...
	I0903 23:45:34.347822  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0903 23:45:36.885583  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:45:36.902360  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:45:36.902439  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:45:36.936103  171911 cri.go:89] found id: ""
	I0903 23:45:36.936133  171911 logs.go:282] 0 containers: []
	W0903 23:45:36.936142  171911 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:45:36.936148  171911 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:45:36.936212  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:45:36.969146  171911 cri.go:89] found id: ""
	I0903 23:45:36.969173  171911 logs.go:282] 0 containers: []
	W0903 23:45:36.969180  171911 logs.go:284] No container was found matching "etcd"
	I0903 23:45:36.969186  171911 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:45:36.969248  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:45:37.002284  171911 cri.go:89] found id: ""
	I0903 23:45:37.002314  171911 logs.go:282] 0 containers: []
	W0903 23:45:37.002324  171911 logs.go:284] No container was found matching "coredns"
	I0903 23:45:37.002331  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:45:37.002385  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:45:37.034701  171911 cri.go:89] found id: ""
	I0903 23:45:37.034731  171911 logs.go:282] 0 containers: []
	W0903 23:45:37.034741  171911 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:45:37.034749  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:45:37.034815  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:45:37.067766  171911 cri.go:89] found id: ""
	I0903 23:45:37.067798  171911 logs.go:282] 0 containers: []
	W0903 23:45:37.067810  171911 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:45:37.067819  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:45:37.067887  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:45:37.100402  171911 cri.go:89] found id: ""
	I0903 23:45:37.100431  171911 logs.go:282] 0 containers: []
	W0903 23:45:37.100439  171911 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:45:37.100445  171911 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:45:37.100495  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:45:37.134783  171911 cri.go:89] found id: ""
	I0903 23:45:37.134814  171911 logs.go:282] 0 containers: []
	W0903 23:45:37.134822  171911 logs.go:284] No container was found matching "kindnet"
	I0903 23:45:37.134828  171911 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0903 23:45:37.134892  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0903 23:45:37.168715  171911 cri.go:89] found id: ""
	I0903 23:45:37.168746  171911 logs.go:282] 0 containers: []
	W0903 23:45:37.168753  171911 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0903 23:45:37.168768  171911 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:45:37.168781  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0903 23:45:37.239216  171911 logs.go:123] Gathering logs for container status ...
	I0903 23:45:37.239259  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0903 23:45:37.278941  171911 logs.go:123] Gathering logs for kubelet ...
	I0903 23:45:37.278977  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0903 23:45:37.327168  171911 logs.go:123] Gathering logs for dmesg ...
	I0903 23:45:37.327207  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0903 23:45:37.340806  171911 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:45:37.340837  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:45:37.402460  171911 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
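The pgrep entries recur roughly every three seconds, which implies a poll-until-deadline wait for the kube-apiserver process. A minimal sketch of such a loop follows; the two-minute deadline is an assumption for illustration, not minikube's actual timeout:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// apiserverRunning reports whether pgrep finds a matching process.
// pgrep exits non-zero when nothing matches, which surfaces here as err != nil.
func apiserverRunning() bool {
	err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run()
	return err == nil
}

func main() {
	deadline := time.Now().Add(2 * time.Minute) // assumed timeout, for illustration
	for time.Now().Before(deadline) {
		if apiserverRunning() {
			fmt.Println("kube-apiserver is up")
			return
		}
		time.Sleep(3 * time.Second) // matches the ~3s cadence seen in the log
	}
	fmt.Println("timed out waiting for kube-apiserver")
}

In this run the process never appears, so every iteration of the loop repeats the full container-probe and log-gathering sequence.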
	I0903 23:45:39.902717  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:45:39.919140  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:45:39.919211  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:45:39.952379  171911 cri.go:89] found id: ""
	I0903 23:45:39.952407  171911 logs.go:282] 0 containers: []
	W0903 23:45:39.952421  171911 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:45:39.952428  171911 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:45:39.952510  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:45:39.986646  171911 cri.go:89] found id: ""
	I0903 23:45:39.986674  171911 logs.go:282] 0 containers: []
	W0903 23:45:39.986682  171911 logs.go:284] No container was found matching "etcd"
	I0903 23:45:39.986688  171911 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:45:39.986750  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:45:40.019946  171911 cri.go:89] found id: ""
	I0903 23:45:40.019984  171911 logs.go:282] 0 containers: []
	W0903 23:45:40.019995  171911 logs.go:284] No container was found matching "coredns"
	I0903 23:45:40.020004  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:45:40.020075  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:45:40.051084  171911 cri.go:89] found id: ""
	I0903 23:45:40.051120  171911 logs.go:282] 0 containers: []
	W0903 23:45:40.051131  171911 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:45:40.051139  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:45:40.051198  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:45:40.084431  171911 cri.go:89] found id: ""
	I0903 23:45:40.084471  171911 logs.go:282] 0 containers: []
	W0903 23:45:40.084485  171911 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:45:40.084493  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:45:40.084590  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:45:40.117261  171911 cri.go:89] found id: ""
	I0903 23:45:40.117289  171911 logs.go:282] 0 containers: []
	W0903 23:45:40.117298  171911 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:45:40.117305  171911 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:45:40.117356  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:45:40.149940  171911 cri.go:89] found id: ""
	I0903 23:45:40.149976  171911 logs.go:282] 0 containers: []
	W0903 23:45:40.149983  171911 logs.go:284] No container was found matching "kindnet"
	I0903 23:45:40.149989  171911 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0903 23:45:40.150049  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0903 23:45:40.185787  171911 cri.go:89] found id: ""
	I0903 23:45:40.185819  171911 logs.go:282] 0 containers: []
	W0903 23:45:40.185828  171911 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0903 23:45:40.185838  171911 logs.go:123] Gathering logs for kubelet ...
	I0903 23:45:40.185849  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0903 23:45:40.236114  171911 logs.go:123] Gathering logs for dmesg ...
	I0903 23:45:40.236151  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0903 23:45:40.249810  171911 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:45:40.249842  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:45:40.315354  171911 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0903 23:45:40.315385  171911 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:45:40.315402  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0903 23:45:40.391973  171911 logs.go:123] Gathering logs for container status ...
	I0903 23:45:40.392014  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0903 23:45:42.929523  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:45:42.946789  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:45:42.946852  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:45:42.981168  171911 cri.go:89] found id: ""
	I0903 23:45:42.981202  171911 logs.go:282] 0 containers: []
	W0903 23:45:42.981214  171911 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:45:42.981223  171911 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:45:42.981290  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:45:43.016160  171911 cri.go:89] found id: ""
	I0903 23:45:43.016191  171911 logs.go:282] 0 containers: []
	W0903 23:45:43.016202  171911 logs.go:284] No container was found matching "etcd"
	I0903 23:45:43.016210  171911 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:45:43.016277  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:45:43.052374  171911 cri.go:89] found id: ""
	I0903 23:45:43.052407  171911 logs.go:282] 0 containers: []
	W0903 23:45:43.052415  171911 logs.go:284] No container was found matching "coredns"
	I0903 23:45:43.052421  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:45:43.052490  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:45:43.087466  171911 cri.go:89] found id: ""
	I0903 23:45:43.087492  171911 logs.go:282] 0 containers: []
	W0903 23:45:43.087499  171911 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:45:43.087506  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:45:43.087578  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:45:43.121733  171911 cri.go:89] found id: ""
	I0903 23:45:43.121770  171911 logs.go:282] 0 containers: []
	W0903 23:45:43.121780  171911 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:45:43.121786  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:45:43.121852  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:45:43.155089  171911 cri.go:89] found id: ""
	I0903 23:45:43.155120  171911 logs.go:282] 0 containers: []
	W0903 23:45:43.155129  171911 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:45:43.155136  171911 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:45:43.155208  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:45:43.187081  171911 cri.go:89] found id: ""
	I0903 23:45:43.187113  171911 logs.go:282] 0 containers: []
	W0903 23:45:43.187124  171911 logs.go:284] No container was found matching "kindnet"
	I0903 23:45:43.187132  171911 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0903 23:45:43.187206  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0903 23:45:43.221988  171911 cri.go:89] found id: ""
	I0903 23:45:43.222020  171911 logs.go:282] 0 containers: []
	W0903 23:45:43.222027  171911 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0903 23:45:43.222037  171911 logs.go:123] Gathering logs for kubelet ...
	I0903 23:45:43.222048  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0903 23:45:43.274015  171911 logs.go:123] Gathering logs for dmesg ...
	I0903 23:45:43.274053  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0903 23:45:43.288204  171911 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:45:43.288237  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:45:43.352172  171911 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0903 23:45:43.352197  171911 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:45:43.352214  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0903 23:45:43.429363  171911 logs.go:123] Gathering logs for container status ...
	I0903 23:45:43.429416  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0903 23:45:45.967138  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:45:45.984430  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:45:45.984508  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:45:46.018620  171911 cri.go:89] found id: ""
	I0903 23:45:46.018656  171911 logs.go:282] 0 containers: []
	W0903 23:45:46.018670  171911 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:45:46.018680  171911 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:45:46.018736  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:45:46.052857  171911 cri.go:89] found id: ""
	I0903 23:45:46.052896  171911 logs.go:282] 0 containers: []
	W0903 23:45:46.052908  171911 logs.go:284] No container was found matching "etcd"
	I0903 23:45:46.052917  171911 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:45:46.052992  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:45:46.086760  171911 cri.go:89] found id: ""
	I0903 23:45:46.086802  171911 logs.go:282] 0 containers: []
	W0903 23:45:46.086815  171911 logs.go:284] No container was found matching "coredns"
	I0903 23:45:46.086824  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:45:46.086897  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:45:46.122770  171911 cri.go:89] found id: ""
	I0903 23:45:46.122808  171911 logs.go:282] 0 containers: []
	W0903 23:45:46.122821  171911 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:45:46.122831  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:45:46.122898  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:45:46.156632  171911 cri.go:89] found id: ""
	I0903 23:45:46.156666  171911 logs.go:282] 0 containers: []
	W0903 23:45:46.156677  171911 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:45:46.156684  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:45:46.156748  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:45:46.189167  171911 cri.go:89] found id: ""
	I0903 23:45:46.189196  171911 logs.go:282] 0 containers: []
	W0903 23:45:46.189204  171911 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:45:46.189211  171911 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:45:46.189281  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:45:46.221676  171911 cri.go:89] found id: ""
	I0903 23:45:46.221703  171911 logs.go:282] 0 containers: []
	W0903 23:45:46.221710  171911 logs.go:284] No container was found matching "kindnet"
	I0903 23:45:46.221716  171911 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0903 23:45:46.221781  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0903 23:45:46.255950  171911 cri.go:89] found id: ""
	I0903 23:45:46.255989  171911 logs.go:282] 0 containers: []
	W0903 23:45:46.256001  171911 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0903 23:45:46.256012  171911 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:45:46.256026  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:45:46.320856  171911 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0903 23:45:46.320887  171911 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:45:46.320904  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0903 23:45:46.395448  171911 logs.go:123] Gathering logs for container status ...
	I0903 23:45:46.395495  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0903 23:45:46.433348  171911 logs.go:123] Gathering logs for kubelet ...
	I0903 23:45:46.433402  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0903 23:45:46.483558  171911 logs.go:123] Gathering logs for dmesg ...
	I0903 23:45:46.483600  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0903 23:45:48.997604  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:45:49.014515  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:45:49.014584  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:45:49.049009  171911 cri.go:89] found id: ""
	I0903 23:45:49.049041  171911 logs.go:282] 0 containers: []
	W0903 23:45:49.049049  171911 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:45:49.049055  171911 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:45:49.049107  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:45:49.082752  171911 cri.go:89] found id: ""
	I0903 23:45:49.082784  171911 logs.go:282] 0 containers: []
	W0903 23:45:49.082792  171911 logs.go:284] No container was found matching "etcd"
	I0903 23:45:49.082799  171911 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:45:49.082853  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:45:49.117820  171911 cri.go:89] found id: ""
	I0903 23:45:49.117851  171911 logs.go:282] 0 containers: []
	W0903 23:45:49.117861  171911 logs.go:284] No container was found matching "coredns"
	I0903 23:45:49.117869  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:45:49.117937  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:45:49.152630  171911 cri.go:89] found id: ""
	I0903 23:45:49.152662  171911 logs.go:282] 0 containers: []
	W0903 23:45:49.152673  171911 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:45:49.152681  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:45:49.152746  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:45:49.186660  171911 cri.go:89] found id: ""
	I0903 23:45:49.186693  171911 logs.go:282] 0 containers: []
	W0903 23:45:49.186705  171911 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:45:49.186715  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:45:49.186787  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:45:49.221850  171911 cri.go:89] found id: ""
	I0903 23:45:49.221879  171911 logs.go:282] 0 containers: []
	W0903 23:45:49.221887  171911 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:45:49.221894  171911 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:45:49.221947  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:45:49.256272  171911 cri.go:89] found id: ""
	I0903 23:45:49.256301  171911 logs.go:282] 0 containers: []
	W0903 23:45:49.256309  171911 logs.go:284] No container was found matching "kindnet"
	I0903 23:45:49.256315  171911 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0903 23:45:49.256378  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0903 23:45:49.292385  171911 cri.go:89] found id: ""
	I0903 23:45:49.292414  171911 logs.go:282] 0 containers: []
	W0903 23:45:49.292422  171911 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0903 23:45:49.292432  171911 logs.go:123] Gathering logs for kubelet ...
	I0903 23:45:49.292446  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0903 23:45:49.343070  171911 logs.go:123] Gathering logs for dmesg ...
	I0903 23:45:49.343109  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0903 23:45:49.356910  171911 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:45:49.356940  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:45:49.423437  171911 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0903 23:45:49.423471  171911 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:45:49.423486  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0903 23:45:49.494062  171911 logs.go:123] Gathering logs for container status ...
	I0903 23:45:49.494108  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0903 23:45:52.034573  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:45:52.051154  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:45:52.051217  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:45:52.088178  171911 cri.go:89] found id: ""
	I0903 23:45:52.088205  171911 logs.go:282] 0 containers: []
	W0903 23:45:52.088214  171911 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:45:52.088222  171911 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:45:52.088284  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:45:52.122560  171911 cri.go:89] found id: ""
	I0903 23:45:52.122595  171911 logs.go:282] 0 containers: []
	W0903 23:45:52.122606  171911 logs.go:284] No container was found matching "etcd"
	I0903 23:45:52.122617  171911 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:45:52.122687  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:45:52.154593  171911 cri.go:89] found id: ""
	I0903 23:45:52.154628  171911 logs.go:282] 0 containers: []
	W0903 23:45:52.154636  171911 logs.go:284] No container was found matching "coredns"
	I0903 23:45:52.154646  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:45:52.154700  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:45:52.188028  171911 cri.go:89] found id: ""
	I0903 23:45:52.188066  171911 logs.go:282] 0 containers: []
	W0903 23:45:52.188079  171911 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:45:52.188088  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:45:52.188162  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:45:52.223140  171911 cri.go:89] found id: ""
	I0903 23:45:52.223165  171911 logs.go:282] 0 containers: []
	W0903 23:45:52.223172  171911 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:45:52.223178  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:45:52.223231  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:45:52.267817  171911 cri.go:89] found id: ""
	I0903 23:45:52.267851  171911 logs.go:282] 0 containers: []
	W0903 23:45:52.267862  171911 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:45:52.267869  171911 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:45:52.267936  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:45:52.302187  171911 cri.go:89] found id: ""
	I0903 23:45:52.302224  171911 logs.go:282] 0 containers: []
	W0903 23:45:52.302236  171911 logs.go:284] No container was found matching "kindnet"
	I0903 23:45:52.302245  171911 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0903 23:45:52.302315  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0903 23:45:52.336716  171911 cri.go:89] found id: ""
	I0903 23:45:52.336742  171911 logs.go:282] 0 containers: []
	W0903 23:45:52.336750  171911 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0903 23:45:52.336761  171911 logs.go:123] Gathering logs for container status ...
	I0903 23:45:52.336776  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0903 23:45:52.376759  171911 logs.go:123] Gathering logs for kubelet ...
	I0903 23:45:52.376793  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0903 23:45:52.424230  171911 logs.go:123] Gathering logs for dmesg ...
	I0903 23:45:52.424274  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0903 23:45:52.438819  171911 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:45:52.438850  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:45:52.505537  171911 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0903 23:45:52.505562  171911 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:45:52.505577  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
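The "Gathering logs for ..." lines fan the same SSH runner out over a fixed set of shell commands, one per log source. A sketch of that loop, with the command strings copied verbatim from the log above and local exec standing in for the SSH runner (for dmesg, -P disables the pager, -H gives human-readable output, -L=never disables color, and --level keeps warnings and worse):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	sources := []struct{ name, cmd string }{
		{"kubelet", "sudo journalctl -u kubelet -n 400"},
		{"dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
		{"CRI-O", "sudo journalctl -u crio -n 400"},
		{"container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"},
	}
	for _, s := range sources {
		fmt.Println("Gathering logs for", s.name, "...")
		out, err := exec.Command("/bin/bash", "-c", s.cmd).CombinedOutput()
		if err != nil {
			fmt.Printf("  %s failed: %v\n", s.name, err)
			continue
		}
		fmt.Printf("  captured %d bytes\n", len(out))
	}
}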
	I0903 23:45:55.082568  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:45:55.100018  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:45:55.100095  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:45:55.135160  171911 cri.go:89] found id: ""
	I0903 23:45:55.135189  171911 logs.go:282] 0 containers: []
	W0903 23:45:55.135201  171911 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:45:55.135210  171911 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:45:55.135268  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:45:55.175763  171911 cri.go:89] found id: ""
	I0903 23:45:55.175800  171911 logs.go:282] 0 containers: []
	W0903 23:45:55.175808  171911 logs.go:284] No container was found matching "etcd"
	I0903 23:45:55.175814  171911 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:45:55.175875  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:45:55.209987  171911 cri.go:89] found id: ""
	I0903 23:45:55.210015  171911 logs.go:282] 0 containers: []
	W0903 23:45:55.210024  171911 logs.go:284] No container was found matching "coredns"
	I0903 23:45:55.210030  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:45:55.210090  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:45:55.244587  171911 cri.go:89] found id: ""
	I0903 23:45:55.244615  171911 logs.go:282] 0 containers: []
	W0903 23:45:55.244623  171911 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:45:55.244630  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:45:55.244699  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:45:55.279333  171911 cri.go:89] found id: ""
	I0903 23:45:55.279363  171911 logs.go:282] 0 containers: []
	W0903 23:45:55.279373  171911 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:45:55.279381  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:45:55.279451  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:45:55.313220  171911 cri.go:89] found id: ""
	I0903 23:45:55.313263  171911 logs.go:282] 0 containers: []
	W0903 23:45:55.313273  171911 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:45:55.313281  171911 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:45:55.313355  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:45:55.348181  171911 cri.go:89] found id: ""
	I0903 23:45:55.348215  171911 logs.go:282] 0 containers: []
	W0903 23:45:55.348224  171911 logs.go:284] No container was found matching "kindnet"
	I0903 23:45:55.348230  171911 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0903 23:45:55.348299  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0903 23:45:55.381456  171911 cri.go:89] found id: ""
	I0903 23:45:55.381482  171911 logs.go:282] 0 containers: []
	W0903 23:45:55.381490  171911 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0903 23:45:55.381500  171911 logs.go:123] Gathering logs for kubelet ...
	I0903 23:45:55.381516  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0903 23:45:55.433817  171911 logs.go:123] Gathering logs for dmesg ...
	I0903 23:45:55.433856  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0903 23:45:55.447772  171911 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:45:55.447804  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:45:55.513762  171911 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0903 23:45:55.513795  171911 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:45:55.513812  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0903 23:45:55.585576  171911 logs.go:123] Gathering logs for container status ...
	I0903 23:45:55.585615  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
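The recurring describe-nodes failure is a symptom rather than a cause: with every crictl probe returning zero containers, no kube-apiserver is serving localhost:8443 inside the VM, so kubectl's TCP connection is refused. A quick probe of the same endpoint confirms the condition (a sketch; host and port come straight from the error text above):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver not reachable:", err) // same refusal kubectl reports
		return
	}
	conn.Close()
	fmt.Println("something is listening on 8443")
}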
	I0903 23:45:58.125483  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:45:58.142430  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:45:58.142505  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:45:58.177668  171911 cri.go:89] found id: ""
	I0903 23:45:58.177697  171911 logs.go:282] 0 containers: []
	W0903 23:45:58.177709  171911 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:45:58.177717  171911 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:45:58.177791  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:45:58.212662  171911 cri.go:89] found id: ""
	I0903 23:45:58.212688  171911 logs.go:282] 0 containers: []
	W0903 23:45:58.212697  171911 logs.go:284] No container was found matching "etcd"
	I0903 23:45:58.212705  171911 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:45:58.212766  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:45:58.248588  171911 cri.go:89] found id: ""
	I0903 23:45:58.248616  171911 logs.go:282] 0 containers: []
	W0903 23:45:58.248623  171911 logs.go:284] No container was found matching "coredns"
	I0903 23:45:58.248629  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:45:58.248684  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:45:58.283427  171911 cri.go:89] found id: ""
	I0903 23:45:58.283459  171911 logs.go:282] 0 containers: []
	W0903 23:45:58.283468  171911 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:45:58.283475  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:45:58.283537  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:45:58.319164  171911 cri.go:89] found id: ""
	I0903 23:45:58.319195  171911 logs.go:282] 0 containers: []
	W0903 23:45:58.319203  171911 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:45:58.319209  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:45:58.319265  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:45:58.354722  171911 cri.go:89] found id: ""
	I0903 23:45:58.354750  171911 logs.go:282] 0 containers: []
	W0903 23:45:58.354758  171911 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:45:58.354764  171911 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:45:58.354816  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:45:58.389144  171911 cri.go:89] found id: ""
	I0903 23:45:58.389171  171911 logs.go:282] 0 containers: []
	W0903 23:45:58.389181  171911 logs.go:284] No container was found matching "kindnet"
	I0903 23:45:58.389187  171911 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0903 23:45:58.389240  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0903 23:45:58.423096  171911 cri.go:89] found id: ""
	I0903 23:45:58.423125  171911 logs.go:282] 0 containers: []
	W0903 23:45:58.423134  171911 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0903 23:45:58.423144  171911 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:45:58.423158  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0903 23:45:58.500171  171911 logs.go:123] Gathering logs for container status ...
	I0903 23:45:58.500208  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0903 23:45:58.538635  171911 logs.go:123] Gathering logs for kubelet ...
	I0903 23:45:58.538663  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0903 23:45:58.584846  171911 logs.go:123] Gathering logs for dmesg ...
	I0903 23:45:58.584882  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0903 23:45:58.598653  171911 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:45:58.598685  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:45:58.666401  171911 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
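Every cycle in this section opens with the same process probe: pgrep -f matches against the full command line, -x requires the pattern to match it exactly, and -n returns only the newest matching PID. Finding nothing, minikube regathers the logs and retries. A stripped-down sketch of that wait loop (the interval and deadline are guesses inferred from the ~3 s cadence of the timestamps, not minikube's actual constants):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(5 * time.Minute)
	for time.Now().Before(deadline) {
		// Exit status 0 means at least one matching process exists.
		if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
			fmt.Println("kube-apiserver process found")
			return
		}
		time.Sleep(3 * time.Second)
	}
	fmt.Println("timed out waiting for kube-apiserver")
}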
	[... nine further log-gathering cycles (23:46:01 through 23:46:26) omitted; each repeats the sequence above verbatim apart from timestamps: every crictl probe returns 0 containers and kubectl describe nodes fails with the same connection-refused error ...]
	I0903 23:46:28.679659  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:46:28.696950  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:46:28.697030  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:46:28.730995  171911 cri.go:89] found id: ""
	I0903 23:46:28.731026  171911 logs.go:282] 0 containers: []
	W0903 23:46:28.731039  171911 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:46:28.731047  171911 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:46:28.731121  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:46:28.765348  171911 cri.go:89] found id: ""
	I0903 23:46:28.765377  171911 logs.go:282] 0 containers: []
	W0903 23:46:28.765396  171911 logs.go:284] No container was found matching "etcd"
	I0903 23:46:28.765404  171911 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:46:28.765471  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:46:28.801427  171911 cri.go:89] found id: ""
	I0903 23:46:28.801459  171911 logs.go:282] 0 containers: []
	W0903 23:46:28.801470  171911 logs.go:284] No container was found matching "coredns"
	I0903 23:46:28.801478  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:46:28.801545  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:46:28.836740  171911 cri.go:89] found id: ""
	I0903 23:46:28.836766  171911 logs.go:282] 0 containers: []
	W0903 23:46:28.836775  171911 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:46:28.836781  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:46:28.836865  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:46:28.872484  171911 cri.go:89] found id: ""
	I0903 23:46:28.872517  171911 logs.go:282] 0 containers: []
	W0903 23:46:28.872528  171911 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:46:28.872538  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:46:28.872619  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:46:28.906796  171911 cri.go:89] found id: ""
	I0903 23:46:28.906840  171911 logs.go:282] 0 containers: []
	W0903 23:46:28.906854  171911 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:46:28.906864  171911 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:46:28.906936  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:46:28.941330  171911 cri.go:89] found id: ""
	I0903 23:46:28.941359  171911 logs.go:282] 0 containers: []
	W0903 23:46:28.941367  171911 logs.go:284] No container was found matching "kindnet"
	I0903 23:46:28.941373  171911 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0903 23:46:28.941447  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0903 23:46:28.975273  171911 cri.go:89] found id: ""
	I0903 23:46:28.975304  171911 logs.go:282] 0 containers: []
	W0903 23:46:28.975316  171911 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0903 23:46:28.975328  171911 logs.go:123] Gathering logs for container status ...
	I0903 23:46:28.975351  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0903 23:46:29.013344  171911 logs.go:123] Gathering logs for kubelet ...
	I0903 23:46:29.013374  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0903 23:46:29.062906  171911 logs.go:123] Gathering logs for dmesg ...
	I0903 23:46:29.062943  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0903 23:46:29.077068  171911 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:46:29.077094  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:46:29.141017  171911 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0903 23:46:29.141041  171911 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:46:29.141059  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0903 23:46:31.720110  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:46:31.737478  171911 kubeadm.go:593] duration metric: took 4m4.418875365s to restartPrimaryControlPlane
	W0903 23:46:31.737562  171911 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0903 23:46:31.737592  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0903 23:46:36.182110  171911 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (4.444484741s)
	I0903 23:46:36.182205  171911 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0903 23:46:36.197763  171911 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0903 23:46:36.209295  171911 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0903 23:46:36.220561  171911 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0903 23:46:36.220584  171911 kubeadm.go:157] found existing configuration files:
	
	I0903 23:46:36.220630  171911 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0903 23:46:36.231194  171911 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0903 23:46:36.231261  171911 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0903 23:46:36.242263  171911 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0903 23:46:36.252204  171911 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0903 23:46:36.252278  171911 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0903 23:46:36.263654  171911 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0903 23:46:36.274160  171911 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0903 23:46:36.274216  171911 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0903 23:46:36.285535  171911 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0903 23:46:36.296495  171911 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0903 23:46:36.296566  171911 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
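	The four grep-then-rm pairs above are minikube's stale-config cleanup: each kubeconfig under /etc/kubernetes is kept only if it already references the expected control-plane endpoint, and is deleted otherwise so the following kubeadm init can rewrite it. A minimal shell sketch of that check-and-remove step (the endpoint string is taken verbatim from the log; the loop itself is illustrative, not minikube's actual code):
	
		for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
		  # keep the file only if it already points at the expected control plane
		  sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" \
		    || sudo rm -f "/etc/kubernetes/$f"
		done
	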
	I0903 23:46:36.308036  171911 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0903 23:46:36.376723  171911 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0903 23:46:36.376807  171911 kubeadm.go:310] [preflight] Running pre-flight checks
	I0903 23:46:36.507237  171911 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0903 23:46:36.507356  171911 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0903 23:46:36.507451  171911 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0903 23:46:36.676775  171911 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0903 23:46:36.678771  171911 out.go:252]   - Generating certificates and keys ...
	I0903 23:46:36.678910  171911 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0903 23:46:36.679002  171911 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0903 23:46:36.679121  171911 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0903 23:46:36.679204  171911 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0903 23:46:36.679317  171911 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0903 23:46:36.679385  171911 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0903 23:46:36.679592  171911 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0903 23:46:36.680075  171911 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0903 23:46:36.680443  171911 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0903 23:46:36.680690  171911 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0903 23:46:36.680741  171911 kubeadm.go:310] [certs] Using the existing "sa" key
	I0903 23:46:36.680801  171911 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0903 23:46:37.040729  171911 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0903 23:46:37.327107  171911 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0903 23:46:37.592932  171911 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0903 23:46:37.842405  171911 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0903 23:46:37.860457  171911 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0903 23:46:37.861477  171911 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0903 23:46:37.861541  171911 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0903 23:46:38.009088  171911 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0903 23:46:38.010918  171911 out.go:252]   - Booting up control plane ...
	I0903 23:46:38.011062  171911 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0903 23:46:38.018027  171911 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0903 23:46:38.018106  171911 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0903 23:46:38.018634  171911 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0903 23:46:38.023296  171911 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0903 23:47:18.025738  171911 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0903 23:47:18.026296  171911 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0903 23:47:18.026552  171911 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0903 23:47:23.027174  171911 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0903 23:47:23.027478  171911 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0903 23:47:33.028031  171911 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0903 23:47:33.028314  171911 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0903 23:47:53.028650  171911 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0903 23:47:53.028911  171911 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0903 23:48:33.031053  171911 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0903 23:48:33.031367  171911 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0903 23:48:33.031406  171911 kubeadm.go:310] 
	I0903 23:48:33.031457  171911 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0903 23:48:33.031522  171911 kubeadm.go:310] 		timed out waiting for the condition
	I0903 23:48:33.031531  171911 kubeadm.go:310] 
	I0903 23:48:33.031571  171911 kubeadm.go:310] 	This error is likely caused by:
	I0903 23:48:33.031621  171911 kubeadm.go:310] 		- The kubelet is not running
	I0903 23:48:33.031747  171911 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0903 23:48:33.031758  171911 kubeadm.go:310] 
	I0903 23:48:33.031898  171911 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0903 23:48:33.031946  171911 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0903 23:48:33.032002  171911 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0903 23:48:33.032011  171911 kubeadm.go:310] 
	I0903 23:48:33.032171  171911 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0903 23:48:33.032298  171911 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0903 23:48:33.032308  171911 kubeadm.go:310] 
	I0903 23:48:33.032463  171911 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0903 23:48:33.032612  171911 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0903 23:48:33.032693  171911 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0903 23:48:33.032780  171911 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0903 23:48:33.032797  171911 kubeadm.go:310] 
	I0903 23:48:33.033539  171911 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0903 23:48:33.033643  171911 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0903 23:48:33.033735  171911 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0903 23:48:33.033908  171911 out.go:285] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
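	
	The init attempt above fails because the kubelet never starts serving its health endpoint on 127.0.0.1:10248, so the wait-control-plane phase times out after 4m0s. The commands kubeadm itself suggests in that output can be run against the node to narrow down why; a sketch using only those quoted commands, wrapped in the minikube ssh form (the profile name old-k8s-version-335468 is inferred from the CRI-O hostnames later in this log):
	
		minikube -p old-k8s-version-335468 ssh "sudo systemctl status kubelet"
		minikube -p old-k8s-version-335468 ssh "sudo journalctl -xeu kubelet | tail -n 50"
		minikube -p old-k8s-version-335468 ssh "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause"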
	
	I0903 23:48:33.033966  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0903 23:48:33.484811  171911 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0903 23:48:33.501986  171911 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0903 23:48:33.513610  171911 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0903 23:48:33.513635  171911 kubeadm.go:157] found existing configuration files:
	
	I0903 23:48:33.513694  171911 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0903 23:48:33.524062  171911 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0903 23:48:33.524128  171911 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0903 23:48:33.534922  171911 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0903 23:48:33.544314  171911 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0903 23:48:33.544364  171911 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0903 23:48:33.555345  171911 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0903 23:48:33.565515  171911 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0903 23:48:33.565578  171911 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0903 23:48:33.576111  171911 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0903 23:48:33.586276  171911 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0903 23:48:33.586335  171911 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0903 23:48:33.597298  171911 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0903 23:48:33.791164  171911 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0903 23:50:29.735983  171911 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0903 23:50:29.736108  171911 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0903 23:50:29.738473  171911 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0903 23:50:29.738539  171911 kubeadm.go:310] [preflight] Running pre-flight checks
	I0903 23:50:29.738632  171911 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0903 23:50:29.738777  171911 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0903 23:50:29.738908  171911 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0903 23:50:29.738994  171911 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0903 23:50:29.740823  171911 out.go:252]   - Generating certificates and keys ...
	I0903 23:50:29.740897  171911 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0903 23:50:29.740956  171911 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0903 23:50:29.741026  171911 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0903 23:50:29.741099  171911 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0903 23:50:29.741175  171911 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0903 23:50:29.741225  171911 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0903 23:50:29.741281  171911 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0903 23:50:29.741336  171911 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0903 23:50:29.741423  171911 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0903 23:50:29.741518  171911 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0903 23:50:29.741593  171911 kubeadm.go:310] [certs] Using the existing "sa" key
	I0903 23:50:29.741669  171911 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0903 23:50:29.741746  171911 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0903 23:50:29.741831  171911 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0903 23:50:29.741921  171911 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0903 23:50:29.742004  171911 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0903 23:50:29.742142  171911 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0903 23:50:29.742267  171911 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0903 23:50:29.742339  171911 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0903 23:50:29.742442  171911 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0903 23:50:29.744016  171911 out.go:252]   - Booting up control plane ...
	I0903 23:50:29.744169  171911 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0903 23:50:29.744283  171911 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0903 23:50:29.744364  171911 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0903 23:50:29.744481  171911 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0903 23:50:29.744722  171911 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0903 23:50:29.744772  171911 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0903 23:50:29.744856  171911 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0903 23:50:29.745144  171911 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0903 23:50:29.745256  171911 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0903 23:50:29.745481  171911 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0903 23:50:29.745588  171911 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0903 23:50:29.745791  171911 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0903 23:50:29.745882  171911 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0903 23:50:29.746079  171911 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0903 23:50:29.746151  171911 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0903 23:50:29.746327  171911 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0903 23:50:29.746336  171911 kubeadm.go:310] 
	I0903 23:50:29.746385  171911 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0903 23:50:29.746439  171911 kubeadm.go:310] 		timed out waiting for the condition
	I0903 23:50:29.746449  171911 kubeadm.go:310] 
	I0903 23:50:29.746505  171911 kubeadm.go:310] 	This error is likely caused by:
	I0903 23:50:29.746554  171911 kubeadm.go:310] 		- The kubelet is not running
	I0903 23:50:29.746678  171911 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0903 23:50:29.746686  171911 kubeadm.go:310] 
	I0903 23:50:29.746808  171911 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0903 23:50:29.746856  171911 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0903 23:50:29.746908  171911 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0903 23:50:29.746918  171911 kubeadm.go:310] 
	I0903 23:50:29.747078  171911 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0903 23:50:29.747201  171911 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0903 23:50:29.747208  171911 kubeadm.go:310] 
	I0903 23:50:29.747368  171911 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0903 23:50:29.747487  171911 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0903 23:50:29.747603  171911 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0903 23:50:29.747684  171911 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0903 23:50:29.747736  171911 kubeadm.go:310] 
	I0903 23:50:29.747765  171911 kubeadm.go:394] duration metric: took 8m2.477240692s to StartCluster
	I0903 23:50:29.747828  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:50:29.747896  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:50:29.786098  171911 cri.go:89] found id: ""
	I0903 23:50:29.786144  171911 logs.go:282] 0 containers: []
	W0903 23:50:29.786162  171911 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:50:29.786169  171911 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:50:29.786251  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:50:29.819064  171911 cri.go:89] found id: ""
	I0903 23:50:29.819095  171911 logs.go:282] 0 containers: []
	W0903 23:50:29.819103  171911 logs.go:284] No container was found matching "etcd"
	I0903 23:50:29.819109  171911 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:50:29.819164  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:50:29.853192  171911 cri.go:89] found id: ""
	I0903 23:50:29.853223  171911 logs.go:282] 0 containers: []
	W0903 23:50:29.853247  171911 logs.go:284] No container was found matching "coredns"
	I0903 23:50:29.853255  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:50:29.853324  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:50:29.885949  171911 cri.go:89] found id: ""
	I0903 23:50:29.885979  171911 logs.go:282] 0 containers: []
	W0903 23:50:29.885991  171911 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:50:29.885999  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:50:29.886051  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:50:29.920423  171911 cri.go:89] found id: ""
	I0903 23:50:29.920451  171911 logs.go:282] 0 containers: []
	W0903 23:50:29.920458  171911 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:50:29.920464  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:50:29.920516  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:50:29.955106  171911 cri.go:89] found id: ""
	I0903 23:50:29.955142  171911 logs.go:282] 0 containers: []
	W0903 23:50:29.955153  171911 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:50:29.955161  171911 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:50:29.955241  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:50:29.988125  171911 cri.go:89] found id: ""
	I0903 23:50:29.988151  171911 logs.go:282] 0 containers: []
	W0903 23:50:29.988159  171911 logs.go:284] No container was found matching "kindnet"
	I0903 23:50:29.988166  171911 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0903 23:50:29.988220  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0903 23:50:30.022768  171911 cri.go:89] found id: ""
	I0903 23:50:30.022795  171911 logs.go:282] 0 containers: []
	W0903 23:50:30.022803  171911 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0903 23:50:30.022813  171911 logs.go:123] Gathering logs for container status ...
	I0903 23:50:30.022828  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0903 23:50:30.059016  171911 logs.go:123] Gathering logs for kubelet ...
	I0903 23:50:30.059049  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0903 23:50:30.108030  171911 logs.go:123] Gathering logs for dmesg ...
	I0903 23:50:30.108065  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0903 23:50:30.121879  171911 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:50:30.121906  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:50:30.190324  171911 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0903 23:50:30.190349  171911 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:50:30.190362  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W0903 23:50:30.296724  171911 out.go:434] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0903 23:50:30.296816  171911 out.go:285] * 
	W0903 23:50:30.296931  171911 out.go:285] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0903 23:50:30.296951  171911 out.go:285] * 
	W0903 23:50:30.299691  171911 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0903 23:50:30.303743  171911 out.go:203] 
	W0903 23:50:30.304964  171911 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
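		A minimal sketch of these checks on the systemd guest (assumes the default kubelet healthz port 10248 seen in the failing probe above):
			systemctl status kubelet --no-pager               # current unit state and last exit code
			journalctl -xeu kubelet --no-pager | tail -n 50   # most recent kubelet log entries
			curl -sS http://localhost:10248/healthz; echo     # same endpoint kubeadm polls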
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtime's CLI.
	
		Here is one example of how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
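		A sketch chaining those two steps (assumes the crio socket path above; 'kube' is the same name filter used in the grep):
			CID=$(sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a --name kube -q | head -n1)
			[ -n "$CID" ] && sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs "$CID"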
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0903 23:50:30.305026  171911 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0903 23:50:30.305059  171911 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
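	The suggestion above maps to a start invocation along these lines (a sketch; the profile name is taken from this run):
	  out/minikube-linux-amd64 start -p old-k8s-version-335468 --extra-config=kubelet.cgroup-driver=systemd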
	I0903 23:50:30.306733  171911 out.go:203] 
	
	
	==> CRI-O <==
	Sep 03 23:59:33 old-k8s-version-335468 crio[804]: time="2025-09-03 23:59:33.315162619Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1756943973315142201,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=110fb2db-29da-4912-bef3-0679d79630f9 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 03 23:59:33 old-k8s-version-335468 crio[804]: time="2025-09-03 23:59:33.315790353Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=650f465f-8065-4a41-91af-d5b6cefc4a32 name=/runtime.v1.RuntimeService/ListContainers
	Sep 03 23:59:33 old-k8s-version-335468 crio[804]: time="2025-09-03 23:59:33.315845918Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=650f465f-8065-4a41-91af-d5b6cefc4a32 name=/runtime.v1.RuntimeService/ListContainers
	Sep 03 23:59:33 old-k8s-version-335468 crio[804]: time="2025-09-03 23:59:33.315876373Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=650f465f-8065-4a41-91af-d5b6cefc4a32 name=/runtime.v1.RuntimeService/ListContainers
	Sep 03 23:59:33 old-k8s-version-335468 crio[804]: time="2025-09-03 23:59:33.346827987Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=33e76698-020b-4373-a7ce-f7b36ff71d6f name=/runtime.v1.RuntimeService/Version
	Sep 03 23:59:33 old-k8s-version-335468 crio[804]: time="2025-09-03 23:59:33.346910080Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=33e76698-020b-4373-a7ce-f7b36ff71d6f name=/runtime.v1.RuntimeService/Version
	Sep 03 23:59:33 old-k8s-version-335468 crio[804]: time="2025-09-03 23:59:33.348469378Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a17a2b86-9faf-4250-aa29-0314aabf79ac name=/runtime.v1.ImageService/ImageFsInfo
	Sep 03 23:59:33 old-k8s-version-335468 crio[804]: time="2025-09-03 23:59:33.348849178Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1756943973348830669,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a17a2b86-9faf-4250-aa29-0314aabf79ac name=/runtime.v1.ImageService/ImageFsInfo
	Sep 03 23:59:33 old-k8s-version-335468 crio[804]: time="2025-09-03 23:59:33.349323902Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=436544ff-3127-4577-a514-254ef07540f0 name=/runtime.v1.RuntimeService/ListContainers
	Sep 03 23:59:33 old-k8s-version-335468 crio[804]: time="2025-09-03 23:59:33.349369005Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=436544ff-3127-4577-a514-254ef07540f0 name=/runtime.v1.RuntimeService/ListContainers
	Sep 03 23:59:33 old-k8s-version-335468 crio[804]: time="2025-09-03 23:59:33.349407298Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=436544ff-3127-4577-a514-254ef07540f0 name=/runtime.v1.RuntimeService/ListContainers
	Sep 03 23:59:33 old-k8s-version-335468 crio[804]: time="2025-09-03 23:59:33.380807439Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=31a994d6-81eb-4aed-a2e0-4b5746c3f6df name=/runtime.v1.RuntimeService/Version
	Sep 03 23:59:33 old-k8s-version-335468 crio[804]: time="2025-09-03 23:59:33.380927894Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=31a994d6-81eb-4aed-a2e0-4b5746c3f6df name=/runtime.v1.RuntimeService/Version
	Sep 03 23:59:33 old-k8s-version-335468 crio[804]: time="2025-09-03 23:59:33.382182644Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6a3f5add-09bd-460b-adae-78d1059ef958 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 03 23:59:33 old-k8s-version-335468 crio[804]: time="2025-09-03 23:59:33.382605047Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1756943973382583409,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6a3f5add-09bd-460b-adae-78d1059ef958 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 03 23:59:33 old-k8s-version-335468 crio[804]: time="2025-09-03 23:59:33.383152700Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c433b130-65a2-4266-98c3-0bcf289e66fd name=/runtime.v1.RuntimeService/ListContainers
	Sep 03 23:59:33 old-k8s-version-335468 crio[804]: time="2025-09-03 23:59:33.383244226Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c433b130-65a2-4266-98c3-0bcf289e66fd name=/runtime.v1.RuntimeService/ListContainers
	Sep 03 23:59:33 old-k8s-version-335468 crio[804]: time="2025-09-03 23:59:33.383279436Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=c433b130-65a2-4266-98c3-0bcf289e66fd name=/runtime.v1.RuntimeService/ListContainers
	Sep 03 23:59:33 old-k8s-version-335468 crio[804]: time="2025-09-03 23:59:33.413412424Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7f9e4659-eba9-42dd-8716-2c5868ff1ad7 name=/runtime.v1.RuntimeService/Version
	Sep 03 23:59:33 old-k8s-version-335468 crio[804]: time="2025-09-03 23:59:33.413652087Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7f9e4659-eba9-42dd-8716-2c5868ff1ad7 name=/runtime.v1.RuntimeService/Version
	Sep 03 23:59:33 old-k8s-version-335468 crio[804]: time="2025-09-03 23:59:33.414728767Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d529d60f-cd70-4e9a-9d6f-1411e4642557 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 03 23:59:33 old-k8s-version-335468 crio[804]: time="2025-09-03 23:59:33.415100036Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1756943973415081354,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d529d60f-cd70-4e9a-9d6f-1411e4642557 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 03 23:59:33 old-k8s-version-335468 crio[804]: time="2025-09-03 23:59:33.415942721Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a98f3ddf-0a37-40ca-a59e-898d599ff088 name=/runtime.v1.RuntimeService/ListContainers
	Sep 03 23:59:33 old-k8s-version-335468 crio[804]: time="2025-09-03 23:59:33.416555227Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a98f3ddf-0a37-40ca-a59e-898d599ff088 name=/runtime.v1.RuntimeService/ListContainers
	Sep 03 23:59:33 old-k8s-version-335468 crio[804]: time="2025-09-03 23:59:33.416618569Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=a98f3ddf-0a37-40ca-a59e-898d599ff088 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
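	One quick way to confirm that nothing is listening on the apiserver port inside the guest (a sketch using standard iproute2 tooling):
	  sudo ss -ltn | grep 8443 || echo "no listener on 8443"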
	
	
	==> dmesg <==
	[Sep 3 23:42] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000007] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.000048] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.002453] (rpcbind)[118]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +1.031954] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000019] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.079592] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.108082] kauditd_printk_skb: 46 callbacks suppressed
	[  +0.035422] kauditd_printk_skb: 46 callbacks suppressed
	[Sep 3 23:48] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 23:59:33 up 17 min,  0 users,  load average: 0.02, 0.02, 0.02
	Linux old-k8s-version-335468 6.6.95 #1 SMP PREEMPT_DYNAMIC Sat Jul 26 03:21:57 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kubelet <==
	Sep 03 23:59:30 old-k8s-version-335468 kubelet[8049]: goroutine 162 [chan receive]:
	Sep 03 23:59:30 old-k8s-version-335468 kubelet[8049]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*controller).Run.func1(0xc0001000c0, 0xc0009e2750)
	Sep 03 23:59:30 old-k8s-version-335468 kubelet[8049]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/controller.go:130 +0x34
	Sep 03 23:59:30 old-k8s-version-335468 kubelet[8049]: created by k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*controller).Run
	Sep 03 23:59:30 old-k8s-version-335468 kubelet[8049]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/controller.go:129 +0xa5
	Sep 03 23:59:30 old-k8s-version-335468 kubelet[8049]: goroutine 163 [select]:
	Sep 03 23:59:30 old-k8s-version-335468 kubelet[8049]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000bf9ef0, 0x4f0ac20, 0xc0009a0910, 0x1, 0xc0001000c0)
	Sep 03 23:59:30 old-k8s-version-335468 kubelet[8049]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:167 +0x149
	Sep 03 23:59:30 old-k8s-version-335468 kubelet[8049]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run(0xc0009a42a0, 0xc0001000c0)
	Sep 03 23:59:30 old-k8s-version-335468 kubelet[8049]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:220 +0x1c5
	Sep 03 23:59:30 old-k8s-version-335468 kubelet[8049]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).StartWithChannel.func1()
	Sep 03 23:59:30 old-k8s-version-335468 kubelet[8049]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:56 +0x2e
	Sep 03 23:59:30 old-k8s-version-335468 kubelet[8049]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1(0xc000865ea0, 0xc0009f4700)
	Sep 03 23:59:30 old-k8s-version-335468 kubelet[8049]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:73 +0x51
	Sep 03 23:59:30 old-k8s-version-335468 kubelet[8049]: created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start
	Sep 03 23:59:30 old-k8s-version-335468 kubelet[8049]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x65
	Sep 03 23:59:30 old-k8s-version-335468 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Sep 03 23:59:30 old-k8s-version-335468 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Sep 03 23:59:31 old-k8s-version-335468 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 114.
	Sep 03 23:59:31 old-k8s-version-335468 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Sep 03 23:59:31 old-k8s-version-335468 kubelet[8059]: I0903 23:59:31.122991    8059 server.go:416] Version: v1.20.0
	Sep 03 23:59:31 old-k8s-version-335468 kubelet[8059]: I0903 23:59:31.123320    8059 server.go:837] Client rotation is on, will bootstrap in background
	Sep 03 23:59:31 old-k8s-version-335468 kubelet[8059]: I0903 23:59:31.125315    8059 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Sep 03 23:59:31 old-k8s-version-335468 kubelet[8059]: W0903 23:59:31.126806    8059 manager.go:159] Cannot detect current cgroup on cgroup v2
	Sep 03 23:59:31 old-k8s-version-335468 kubelet[8059]: I0903 23:59:31.127496    8059 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-335468 -n old-k8s-version-335468
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-335468 -n old-k8s-version-335468: exit status 2 (232.992246ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "old-k8s-version-335468" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (541.90s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (345.75s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
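The poll behind the warnings below is equivalent to a selector query along these lines (a sketch; the kubectl context is assumed to match the profile name):
  kubectl --context old-k8s-version-335468 get pods -n kubernetes-dashboard -l k8s-app=kubernetes-dashboard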
helpers_test.go:337: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.80:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.80:8443: connect: connection refused
[the identical WARNING repeated 15 more times while the apiserver remained unreachable]
E0903 23:59:49.580211  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/custom-flannel-380966/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.80:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.80:8443: connect: connection refused
[the identical WARNING repeated 16 more times while the apiserver remained unreachable]
E0904 00:00:06.323803  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/bridge-380966/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.80:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.80:8443: connect: connection refused
[the identical WARNING repeated 36 more times while the apiserver remained unreachable]
E0904 00:00:44.139993  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/flannel-380966/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.80:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.80:8443: connect: connection refused
[the identical WARNING repeated 14 more times while the apiserver remained unreachable]
E0904 00:00:58.234325  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/enable-default-cni-380966/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.80:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.80:8443: connect: connection refused
[the identical WARNING repeated 3 more times while the apiserver remained unreachable]
E0904 00:01:03.160744  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/addons-389176/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.80:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.80:8443: connect: connection refused
[the identical WARNING repeated 31 more times while the apiserver remained unreachable]
helpers_test.go:337: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.80:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.80:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.80:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.80:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.80:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.80:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.80:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.80:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.80:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.80:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.80:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.80:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.80:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.80:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.80:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.80:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.80:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.80:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.80:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.80:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.80:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.80:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.80:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.80:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.80:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.80:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.80:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.80:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.80:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.80:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.80:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.80:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.80:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.80:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.80:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.80:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.80:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.80:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.80:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.80:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.80:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.80:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.80:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.80:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.80:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.80:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.80:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.80:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.80:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.80:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.80:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.80:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.80:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.80:8443: connect: connection refused
E0904 00:02:01.592467  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/no-preload-434043/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.80:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.80:8443: connect: connection refused
(previous warning repeated 44 more times)
E0904 00:02:46.320599  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/auto-380966/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.80:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.80:8443: connect: connection refused
(previous warning repeated 1 more time)
E0904 00:02:48.518277  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/default-k8s-diff-port-799704/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.80:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.80:8443: connect: connection refused
(previous warning repeated 48 more times)
E0904 00:03:37.764710  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/kindnet-380966/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.80:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.80:8443: connect: connection refused
(previous warning repeated 20 more times)
E0904 00:03:59.138851  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/functional-381687/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.80:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.80:8443: connect: connection refused
[previous warning repeated 13 more times]
E0904 00:04:12.838737  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/calico-380966/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.80:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.80:8443: connect: connection refused
[previous warning repeated 36 more times]
E0904 00:04:49.580783  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/custom-flannel-380966/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.80:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.80:8443: connect: connection refused
[previous warning repeated 16 more times]
E0904 00:05:06.323810  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/bridge-380966/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.80:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.80:8443: connect: connection refused
[previous warning repeated 10 more times]
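The collapsed warnings above come from a poll loop that repeatedly lists pods matching a label selector, logs each failure, and gives up when the surrounding context expires. A minimal Go sketch of that pattern follows, assuming client-go; the function and its names are illustrative, not minikube's actual helpers_test.go code:

// A minimal sketch (assumption: not minikube's actual helper) of the poll
// loop that produces the repeated WARNING lines above: list pods by label
// selector until one is Running, or fail when the context deadline expires.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func waitForPods(ctx context.Context, c kubernetes.Interface, ns, selector string) error {
	tick := time.NewTicker(3 * time.Second)
	defer tick.Stop()
	for {
		pods, err := c.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
		if err != nil {
			// With the apiserver down, every poll lands here with
			// "connection refused", matching the WARNING lines above.
			fmt.Printf("WARNING: pod list for %q %q returned: %v\n", ns, selector, err)
		} else {
			for _, p := range pods.Items {
				if p.Status.Phase == corev1.PodRunning {
					return nil
				}
			}
		}
		select {
		case <-ctx.Done():
			return ctx.Err() // surfaces as "context deadline exceeded" after 9m0s
		case <-tick.C:
		}
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 9*time.Minute)
	defer cancel()
	err = waitForPods(ctx, client, "kubernetes-dashboard", "k8s-app=kubernetes-dashboard")
	fmt.Println("wait result:", err)
}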
start_stop_delete_test.go:285: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:285: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-335468 -n old-k8s-version-335468
start_stop_delete_test.go:285: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-335468 -n old-k8s-version-335468: exit status 2 (246.810836ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:285: status error: exit status 2 (may be ok)
start_stop_delete_test.go:285: "old-k8s-version-335468" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:286: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-335468 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:289: (dbg) Non-zero exit: kubectl --context old-k8s-version-335468 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.87µs)
start_stop_delete_test.go:291: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-335468 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:295: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
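Once the apiserver is reachable again, the image check that timed out above can be reproduced with a short client-go program like the following sketch (the kubeconfig path and error handling are illustrative assumptions):

// Hypothetical reproduction of the image check that could not complete above:
// fetch the dashboard-metrics-scraper Deployment and print its container images.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	dep, err := client.AppsV1().Deployments("kubernetes-dashboard").
		Get(context.Background(), "dashboard-metrics-scraper", metav1.GetOptions{})
	if err != nil {
		panic(err) // fails with "connection refused" while the apiserver is down
	}
	for _, c := range dep.Spec.Template.Spec.Containers {
		// The test expects one of these to contain "registry.k8s.io/echoserver:1.4".
		fmt.Println(c.Image)
	}
}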
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-335468 -n old-k8s-version-335468
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-335468 -n old-k8s-version-335468: exit status 2 (234.5453ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-335468 logs -n 25
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                    ARGS                                                                                                                     │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p no-preload-434043                                                                                                                                                                                                                        │ no-preload-434043            │ jenkins │ v1.36.0 │ 03 Sep 25 23:40 UTC │ 03 Sep 25 23:40 UTC │
	│ delete  │ -p no-preload-434043                                                                                                                                                                                                                        │ no-preload-434043            │ jenkins │ v1.36.0 │ 03 Sep 25 23:40 UTC │ 03 Sep 25 23:40 UTC │
	│ start   │ -p newest-cni-959437 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.0 │ newest-cni-959437            │ jenkins │ v1.36.0 │ 03 Sep 25 23:40 UTC │ 03 Sep 25 23:40 UTC │
	│ image   │ embed-certs-088493 image list --format=json                                                                                                                                                                                                 │ embed-certs-088493           │ jenkins │ v1.36.0 │ 03 Sep 25 23:40 UTC │ 03 Sep 25 23:40 UTC │
	│ pause   │ -p embed-certs-088493 --alsologtostderr -v=1                                                                                                                                                                                                │ embed-certs-088493           │ jenkins │ v1.36.0 │ 03 Sep 25 23:40 UTC │ 03 Sep 25 23:40 UTC │
	│ unpause │ -p embed-certs-088493 --alsologtostderr -v=1                                                                                                                                                                                                │ embed-certs-088493           │ jenkins │ v1.36.0 │ 03 Sep 25 23:40 UTC │ 03 Sep 25 23:40 UTC │
	│ delete  │ -p embed-certs-088493                                                                                                                                                                                                                       │ embed-certs-088493           │ jenkins │ v1.36.0 │ 03 Sep 25 23:40 UTC │ 03 Sep 25 23:40 UTC │
	│ delete  │ -p embed-certs-088493                                                                                                                                                                                                                       │ embed-certs-088493           │ jenkins │ v1.36.0 │ 03 Sep 25 23:40 UTC │ 03 Sep 25 23:40 UTC │
	│ image   │ default-k8s-diff-port-799704 image list --format=json                                                                                                                                                                                       │ default-k8s-diff-port-799704 │ jenkins │ v1.36.0 │ 03 Sep 25 23:40 UTC │ 03 Sep 25 23:40 UTC │
	│ pause   │ -p default-k8s-diff-port-799704 --alsologtostderr -v=1                                                                                                                                                                                      │ default-k8s-diff-port-799704 │ jenkins │ v1.36.0 │ 03 Sep 25 23:40 UTC │ 03 Sep 25 23:40 UTC │
	│ unpause │ -p default-k8s-diff-port-799704 --alsologtostderr -v=1                                                                                                                                                                                      │ default-k8s-diff-port-799704 │ jenkins │ v1.36.0 │ 03 Sep 25 23:40 UTC │ 03 Sep 25 23:40 UTC │
	│ delete  │ -p default-k8s-diff-port-799704                                                                                                                                                                                                             │ default-k8s-diff-port-799704 │ jenkins │ v1.36.0 │ 03 Sep 25 23:40 UTC │ 03 Sep 25 23:40 UTC │
	│ delete  │ -p default-k8s-diff-port-799704                                                                                                                                                                                                             │ default-k8s-diff-port-799704 │ jenkins │ v1.36.0 │ 03 Sep 25 23:40 UTC │ 03 Sep 25 23:40 UTC │
	│ addons  │ enable metrics-server -p newest-cni-959437 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                     │ newest-cni-959437            │ jenkins │ v1.36.0 │ 03 Sep 25 23:40 UTC │ 03 Sep 25 23:40 UTC │
	│ stop    │ -p newest-cni-959437 --alsologtostderr -v=3                                                                                                                                                                                                 │ newest-cni-959437            │ jenkins │ v1.36.0 │ 03 Sep 25 23:40 UTC │ 03 Sep 25 23:41 UTC │
	│ addons  │ enable dashboard -p newest-cni-959437 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                │ newest-cni-959437            │ jenkins │ v1.36.0 │ 03 Sep 25 23:41 UTC │ 03 Sep 25 23:41 UTC │
	│ start   │ -p newest-cni-959437 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.0 │ newest-cni-959437            │ jenkins │ v1.36.0 │ 03 Sep 25 23:41 UTC │ 03 Sep 25 23:41 UTC │
	│ image   │ newest-cni-959437 image list --format=json                                                                                                                                                                                                  │ newest-cni-959437            │ jenkins │ v1.36.0 │ 03 Sep 25 23:41 UTC │ 03 Sep 25 23:41 UTC │
	│ pause   │ -p newest-cni-959437 --alsologtostderr -v=1                                                                                                                                                                                                 │ newest-cni-959437            │ jenkins │ v1.36.0 │ 03 Sep 25 23:41 UTC │ 03 Sep 25 23:41 UTC │
	│ unpause │ -p newest-cni-959437 --alsologtostderr -v=1                                                                                                                                                                                                 │ newest-cni-959437            │ jenkins │ v1.36.0 │ 03 Sep 25 23:41 UTC │ 03 Sep 25 23:41 UTC │
	│ delete  │ -p newest-cni-959437                                                                                                                                                                                                                        │ newest-cni-959437            │ jenkins │ v1.36.0 │ 03 Sep 25 23:41 UTC │ 03 Sep 25 23:41 UTC │
	│ delete  │ -p newest-cni-959437                                                                                                                                                                                                                        │ newest-cni-959437            │ jenkins │ v1.36.0 │ 03 Sep 25 23:41 UTC │ 03 Sep 25 23:41 UTC │
	│ stop    │ -p old-k8s-version-335468 --alsologtostderr -v=3                                                                                                                                                                                            │ old-k8s-version-335468       │ jenkins │ v1.36.0 │ 03 Sep 25 23:41 UTC │ 03 Sep 25 23:41 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-335468 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                           │ old-k8s-version-335468       │ jenkins │ v1.36.0 │ 03 Sep 25 23:41 UTC │ 03 Sep 25 23:41 UTC │
	│ start   │ -p old-k8s-version-335468 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0 │ old-k8s-version-335468       │ jenkins │ v1.36.0 │ 03 Sep 25 23:41 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/03 23:41:58
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
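The header layout described above is the standard klog format. For reference, a tiny demo of the calls that produce the I/W/E-prefixed lines that follow, assuming k8s.io/klog/v2 (the library behind logs in this format):

// Tiny demo of klog output in the "[IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg" layout.
package main

import "k8s.io/klog/v2"

func main() {
	defer klog.Flush()
	klog.Infof("Setting OutFile to fd %d ...", 1)          // emits an I-prefixed line
	klog.Warning("unexpected machine state, will restart") // emits a W-prefixed line
	klog.Error("loading client cert failed")               // emits an E-prefixed line
}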
	I0903 23:41:58.777140  171911 out.go:360] Setting OutFile to fd 1 ...
	I0903 23:41:58.777406  171911 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0903 23:41:58.777416  171911 out.go:374] Setting ErrFile to fd 2...
	I0903 23:41:58.777422  171911 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0903 23:41:58.777607  171911 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21341-109162/.minikube/bin
	I0903 23:41:58.778141  171911 out.go:368] Setting JSON to false
	I0903 23:41:58.779000  171911 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":8663,"bootTime":1756934256,"procs":189,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0903 23:41:58.779090  171911 start.go:140] virtualization: kvm guest
	I0903 23:41:58.781253  171911 out.go:179] * [old-k8s-version-335468] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	I0903 23:41:58.782571  171911 notify.go:220] Checking for updates...
	I0903 23:41:58.782584  171911 out.go:179]   - MINIKUBE_LOCATION=21341
	I0903 23:41:58.783694  171911 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0903 23:41:58.784604  171911 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21341-109162/kubeconfig
	I0903 23:41:58.785686  171911 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21341-109162/.minikube
	I0903 23:41:58.786886  171911 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0903 23:41:58.787874  171911 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0903 23:41:58.789111  171911 config.go:182] Loaded profile config "old-k8s-version-335468": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0903 23:41:58.789531  171911 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21341-109162/.minikube/bin/docker-machine-driver-kvm2
	I0903 23:41:58.789581  171911 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0903 23:41:58.804713  171911 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41019
	I0903 23:41:58.805180  171911 main.go:141] libmachine: () Calling .GetVersion
	I0903 23:41:58.805760  171911 main.go:141] libmachine: Using API Version  1
	I0903 23:41:58.805799  171911 main.go:141] libmachine: () Calling .SetConfigRaw
	I0903 23:41:58.806176  171911 main.go:141] libmachine: () Calling .GetMachineName
	I0903 23:41:58.806424  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .DriverName
	I0903 23:41:58.808193  171911 out.go:179] * Kubernetes 1.34.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.34.0
	I0903 23:41:58.809451  171911 driver.go:421] Setting default libvirt URI to qemu:///system
	I0903 23:41:58.809758  171911 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21341-109162/.minikube/bin/docker-machine-driver-kvm2
	I0903 23:41:58.809795  171911 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0903 23:41:58.825067  171911 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37531
	I0903 23:41:58.825609  171911 main.go:141] libmachine: () Calling .GetVersion
	I0903 23:41:58.826091  171911 main.go:141] libmachine: Using API Version  1
	I0903 23:41:58.826116  171911 main.go:141] libmachine: () Calling .SetConfigRaw
	I0903 23:41:58.826506  171911 main.go:141] libmachine: () Calling .GetMachineName
	I0903 23:41:58.826651  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .DriverName
	I0903 23:41:58.862143  171911 out.go:179] * Using the kvm2 driver based on existing profile
	I0903 23:41:58.863156  171911 start.go:304] selected driver: kvm2
	I0903 23:41:58.863168  171911 start.go:918] validating driver "kvm2" against &{Name:old-k8s-version-335468 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21147/minikube-v1.36.0-1753487480-21147-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-335468 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.80 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0903 23:41:58.863278  171911 start.go:929] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0903 23:41:58.863960  171911 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0903 23:41:58.864040  171911 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21341-109162/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0903 23:41:58.879770  171911 install.go:137] /home/jenkins/minikube-integration/21341-109162/.minikube/bin/docker-machine-driver-kvm2 version is 1.36.0
	I0903 23:41:58.880346  171911 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0903 23:41:58.880393  171911 cni.go:84] Creating CNI manager for ""
	I0903 23:41:58.880445  171911 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0903 23:41:58.880503  171911 start.go:348] cluster config:
	{Name:old-k8s-version-335468 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21147/minikube-v1.36.0-1753487480-21147-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-335468 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.80 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
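The two dumps above are Go's %+v rendering of minikube's cluster config. An illustrative, trimmed sketch of that shape follows; the field names are taken from the dump, but the struct itself is a simplification, not minikube's full type:

// Illustrative subset of the config printed above (not minikube's full type).
package config

type KubernetesConfig struct {
	KubernetesVersion string // "v1.20.0" in this run
	ClusterName       string // "old-k8s-version-335468"
	ContainerRuntime  string // "crio"
	NetworkPlugin     string // "cni"
	ServiceCIDR       string // "10.96.0.0/12"
}

type ClusterConfig struct {
	Name             string          // profile name
	Memory           int             // MB; 3072 here
	CPUs             int             // 2 here
	Driver           string          // "kvm2"
	Addons           map[string]bool // dashboard:true
	KubernetesConfig KubernetesConfig
}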
	I0903 23:41:58.880659  171911 iso.go:125] acquiring lock: {Name:mk1032fbced3c9e76ba5a04480289e9f07d0eb9c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0903 23:41:58.882387  171911 out.go:179] * Starting "old-k8s-version-335468" primary control-plane node in "old-k8s-version-335468" cluster
	I0903 23:41:58.883545  171911 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0903 23:41:58.883582  171911 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21341-109162/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0903 23:41:58.883591  171911 cache.go:58] Caching tarball of preloaded images
	I0903 23:41:58.883679  171911 preload.go:172] Found /home/jenkins/minikube-integration/21341-109162/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0903 23:41:58.883689  171911 cache.go:61] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0903 23:41:58.883774  171911 profile.go:143] Saving config to /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/old-k8s-version-335468/config.json ...
	I0903 23:41:58.883966  171911 start.go:360] acquireMachinesLock for old-k8s-version-335468: {Name:mkcbe368d68a51a2a3c0eadc653c4df7d9736b4b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0903 23:41:58.884013  171911 start.go:364] duration metric: took 27.848µs to acquireMachinesLock for "old-k8s-version-335468"
	I0903 23:41:58.884027  171911 start.go:96] Skipping create...Using existing machine configuration
	I0903 23:41:58.884034  171911 fix.go:54] fixHost starting: 
	I0903 23:41:58.884290  171911 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21341-109162/.minikube/bin/docker-machine-driver-kvm2
	I0903 23:41:58.884339  171911 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0903 23:41:58.899629  171911 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38175
	I0903 23:41:58.900295  171911 main.go:141] libmachine: () Calling .GetVersion
	I0903 23:41:58.901063  171911 main.go:141] libmachine: Using API Version  1
	I0903 23:41:58.901090  171911 main.go:141] libmachine: () Calling .SetConfigRaw
	I0903 23:41:58.901496  171911 main.go:141] libmachine: () Calling .GetMachineName
	I0903 23:41:58.901698  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .DriverName
	I0903 23:41:58.901857  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetState
	I0903 23:41:58.903463  171911 fix.go:112] recreateIfNeeded on old-k8s-version-335468: state=Stopped err=<nil>
	I0903 23:41:58.903488  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .DriverName
	W0903 23:41:58.903630  171911 fix.go:138] unexpected machine state, will restart: <nil>
	I0903 23:41:58.905426  171911 out.go:252] * Restarting existing kvm2 VM for "old-k8s-version-335468" ...
	I0903 23:41:58.905455  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .Start
	I0903 23:41:58.905612  171911 main.go:141] libmachine: (old-k8s-version-335468) starting domain...
	I0903 23:41:58.905634  171911 main.go:141] libmachine: (old-k8s-version-335468) ensuring networks are active...
	I0903 23:41:58.906424  171911 main.go:141] libmachine: (old-k8s-version-335468) Ensuring network default is active
	I0903 23:41:58.906730  171911 main.go:141] libmachine: (old-k8s-version-335468) Ensuring network mk-old-k8s-version-335468 is active
	I0903 23:41:58.907059  171911 main.go:141] libmachine: (old-k8s-version-335468) getting domain XML...
	I0903 23:41:58.907800  171911 main.go:141] libmachine: (old-k8s-version-335468) creating domain...
	I0903 23:42:00.140356  171911 main.go:141] libmachine: (old-k8s-version-335468) waiting for IP...
	I0903 23:42:00.141202  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has defined MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
	I0903 23:42:00.141582  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | unable to find current IP address of domain old-k8s-version-335468 in network mk-old-k8s-version-335468
	I0903 23:42:00.141709  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | I0903 23:42:00.141590  171947 retry.go:31] will retry after 276.832755ms: waiting for domain to come up
	I0903 23:42:00.420407  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has defined MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
	I0903 23:42:00.420855  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | unable to find current IP address of domain old-k8s-version-335468 in network mk-old-k8s-version-335468
	I0903 23:42:00.420917  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | I0903 23:42:00.420836  171947 retry.go:31] will retry after 314.668622ms: waiting for domain to come up
	I0903 23:42:00.737468  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has defined MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
	I0903 23:42:00.737871  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | unable to find current IP address of domain old-k8s-version-335468 in network mk-old-k8s-version-335468
	I0903 23:42:00.737901  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | I0903 23:42:00.737828  171947 retry.go:31] will retry after 345.8826ms: waiting for domain to come up
	I0903 23:42:01.085701  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has defined MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
	I0903 23:42:01.086185  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | unable to find current IP address of domain old-k8s-version-335468 in network mk-old-k8s-version-335468
	I0903 23:42:01.086217  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | I0903 23:42:01.086168  171947 retry.go:31] will retry after 426.296812ms: waiting for domain to come up
	I0903 23:42:01.513991  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has defined MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
	I0903 23:42:01.514453  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | unable to find current IP address of domain old-k8s-version-335468 in network mk-old-k8s-version-335468
	I0903 23:42:01.514482  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | I0903 23:42:01.514426  171947 retry.go:31] will retry after 602.972692ms: waiting for domain to come up
	I0903 23:42:02.119438  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has defined MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
	I0903 23:42:02.119856  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | unable to find current IP address of domain old-k8s-version-335468 in network mk-old-k8s-version-335468
	I0903 23:42:02.119885  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | I0903 23:42:02.119827  171947 retry.go:31] will retry after 798.351499ms: waiting for domain to come up
	I0903 23:42:02.919839  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has defined MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
	I0903 23:42:02.920276  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | unable to find current IP address of domain old-k8s-version-335468 in network mk-old-k8s-version-335468
	I0903 23:42:02.920307  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | I0903 23:42:02.920220  171947 retry.go:31] will retry after 1.022190105s: waiting for domain to come up
	I0903 23:42:03.944354  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has defined MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
	I0903 23:42:03.944807  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | unable to find current IP address of domain old-k8s-version-335468 in network mk-old-k8s-version-335468
	I0903 23:42:03.944840  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | I0903 23:42:03.944747  171947 retry.go:31] will retry after 1.29364095s: waiting for domain to come up
	I0903 23:42:05.240165  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has defined MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
	I0903 23:42:05.240547  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | unable to find current IP address of domain old-k8s-version-335468 in network mk-old-k8s-version-335468
	I0903 23:42:05.240578  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | I0903 23:42:05.240525  171947 retry.go:31] will retry after 1.368503788s: waiting for domain to come up
	I0903 23:42:06.611109  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has defined MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
	I0903 23:42:06.611618  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | unable to find current IP address of domain old-k8s-version-335468 in network mk-old-k8s-version-335468
	I0903 23:42:06.611652  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | I0903 23:42:06.611578  171947 retry.go:31] will retry after 2.084047059s: waiting for domain to come up
	I0903 23:42:08.698604  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has defined MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
	I0903 23:42:08.699065  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | unable to find current IP address of domain old-k8s-version-335468 in network mk-old-k8s-version-335468
	I0903 23:42:08.699089  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | I0903 23:42:08.699048  171947 retry.go:31] will retry after 2.491740737s: waiting for domain to come up
	I0903 23:42:11.193535  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has defined MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
	I0903 23:42:11.194024  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | unable to find current IP address of domain old-k8s-version-335468 in network mk-old-k8s-version-335468
	I0903 23:42:11.194066  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | I0903 23:42:11.194000  171947 retry.go:31] will retry after 2.442590545s: waiting for domain to come up
	I0903 23:42:13.638462  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has defined MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
	I0903 23:42:13.638791  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | unable to find current IP address of domain old-k8s-version-335468 in network mk-old-k8s-version-335468
	I0903 23:42:13.638812  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | I0903 23:42:13.638754  171947 retry.go:31] will retry after 4.493184117s: waiting for domain to come up
	I0903 23:42:18.134025  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has defined MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
	I0903 23:42:18.134463  171911 main.go:141] libmachine: (old-k8s-version-335468) found domain IP: 192.168.61.80
	I0903 23:42:18.134496  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has current primary IP address 192.168.61.80 and MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
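The retry cadence above (276ms growing to ~4.5s between probes) is a grow-with-jitter backoff wrapped around the DHCP-lease lookup. A self-contained Go sketch of that pattern; retryWithBackoff and its growth factor are illustrative, not minikube's exported API:

// Sketch of the backoff loop visible in the "will retry after ..." lines.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff calls fn until it succeeds or maxWait elapses, sleeping
// an increasing, lightly jittered interval between attempts.
func retryWithBackoff(fn func() error, maxWait time.Duration) error {
	deadline := time.Now().Add(maxWait)
	wait := 250 * time.Millisecond
	for {
		err := fn()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out, last error: %w", err)
		}
		jittered := wait + time.Duration(rand.Int63n(int64(wait/2)))
		fmt.Printf("will retry after %v: waiting for domain to come up\n", jittered)
		time.Sleep(jittered)
		wait = wait * 3 / 2 // roughly the growth rate seen in the log
	}
}

func main() {
	attempts := 0
	err := retryWithBackoff(func() error {
		attempts++
		if attempts < 4 {
			return errors.New("unable to find current IP address of domain")
		}
		return nil // pretend the DHCP lease finally appeared
	}, 30*time.Second)
	fmt.Println("result:", err)
}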
	I0903 23:42:18.134511  171911 main.go:141] libmachine: (old-k8s-version-335468) reserving static IP address...
	I0903 23:42:18.134886  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | found host DHCP lease matching {name: "old-k8s-version-335468", mac: "52:54:00:a2:6b:b9", ip: "192.168.61.80"} in network mk-old-k8s-version-335468: {Iface:virbr3 ExpiryTime:2025-09-04 00:42:10 +0000 UTC Type:0 Mac:52:54:00:a2:6b:b9 Iaid: IPaddr:192.168.61.80 Prefix:24 Hostname:old-k8s-version-335468 Clientid:01:52:54:00:a2:6b:b9}
	I0903 23:42:18.134919  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | skip adding static IP to network mk-old-k8s-version-335468 - found existing host DHCP lease matching {name: "old-k8s-version-335468", mac: "52:54:00:a2:6b:b9", ip: "192.168.61.80"}
	I0903 23:42:18.134935  171911 main.go:141] libmachine: (old-k8s-version-335468) reserved static IP address 192.168.61.80 for domain old-k8s-version-335468
	I0903 23:42:18.134949  171911 main.go:141] libmachine: (old-k8s-version-335468) waiting for SSH...
	I0903 23:42:18.134965  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | Getting to WaitForSSH function...
	I0903 23:42:18.137067  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has defined MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
	I0903 23:42:18.137412  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:6b:b9", ip: ""} in network mk-old-k8s-version-335468: {Iface:virbr3 ExpiryTime:2025-09-04 00:42:10 +0000 UTC Type:0 Mac:52:54:00:a2:6b:b9 Iaid: IPaddr:192.168.61.80 Prefix:24 Hostname:old-k8s-version-335468 Clientid:01:52:54:00:a2:6b:b9}
	I0903 23:42:18.137435  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has defined IP address 192.168.61.80 and MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
	I0903 23:42:18.137591  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | Using SSH client type: external
	I0903 23:42:18.137615  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | Using SSH private key: /home/jenkins/minikube-integration/21341-109162/.minikube/machines/old-k8s-version-335468/id_rsa (-rw-------)
	I0903 23:42:18.137661  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.80 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/21341-109162/.minikube/machines/old-k8s-version-335468/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0903 23:42:18.137678  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | About to run SSH command:
	I0903 23:42:18.137689  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | exit 0
	I0903 23:42:18.265417  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | SSH cmd err, output: <nil>: 
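WaitForSSH above shells out to /usr/bin/ssh and runs `exit 0`, treating a zero exit status as proof that the guest's sshd is reachable. A Go sketch of the same probe using the key options from the log; sshReachable is a hypothetical helper name:

// Sketch of the external-SSH reachability probe: run `exit 0` over ssh and
// treat a zero exit status as "SSH is up". Host and key path are taken from
// the log above; this is illustrative, not minikube's code.
package main

import (
	"fmt"
	"os/exec"
)

func sshReachable(host, keyPath string) bool {
	args := []string{
		"-F", "/dev/null",
		"-o", "ConnectionAttempts=3",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"-p", "22",
		"docker@" + host,
		"exit 0",
	}
	// Run returns a non-nil error for any non-zero exit status.
	return exec.Command("/usr/bin/ssh", args...).Run() == nil
}

func main() {
	key := "/home/jenkins/minikube-integration/21341-109162/.minikube/machines/old-k8s-version-335468/id_rsa"
	fmt.Println("ssh reachable:", sshReachable("192.168.61.80", key))
}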
	I0903 23:42:18.265809  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetConfigRaw
	I0903 23:42:18.266396  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetIP
	I0903 23:42:18.269013  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has defined MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
	I0903 23:42:18.269322  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:6b:b9", ip: ""} in network mk-old-k8s-version-335468: {Iface:virbr3 ExpiryTime:2025-09-04 00:42:10 +0000 UTC Type:0 Mac:52:54:00:a2:6b:b9 Iaid: IPaddr:192.168.61.80 Prefix:24 Hostname:old-k8s-version-335468 Clientid:01:52:54:00:a2:6b:b9}
	I0903 23:42:18.269352  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has defined IP address 192.168.61.80 and MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
	I0903 23:42:18.269559  171911 profile.go:143] Saving config to /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/old-k8s-version-335468/config.json ...
	I0903 23:42:18.269795  171911 machine.go:93] provisionDockerMachine start ...
	I0903 23:42:18.269824  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .DriverName
	I0903 23:42:18.270044  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetSSHHostname
	I0903 23:42:18.272246  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has defined MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
	I0903 23:42:18.272543  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:6b:b9", ip: ""} in network mk-old-k8s-version-335468: {Iface:virbr3 ExpiryTime:2025-09-04 00:42:10 +0000 UTC Type:0 Mac:52:54:00:a2:6b:b9 Iaid: IPaddr:192.168.61.80 Prefix:24 Hostname:old-k8s-version-335468 Clientid:01:52:54:00:a2:6b:b9}
	I0903 23:42:18.272584  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has defined IP address 192.168.61.80 and MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
	I0903 23:42:18.272665  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetSSHPort
	I0903 23:42:18.272846  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetSSHKeyPath
	I0903 23:42:18.272997  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetSSHKeyPath
	I0903 23:42:18.273116  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetSSHUsername
	I0903 23:42:18.273294  171911 main.go:141] libmachine: Using SSH client type: native
	I0903 23:42:18.273564  171911 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83a420] 0x83d120 <nil>  [] 0s} 192.168.61.80 22 <nil> <nil>}
	I0903 23:42:18.273578  171911 main.go:141] libmachine: About to run SSH command:
	hostname
	I0903 23:42:18.389858  171911 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0903 23:42:18.389891  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetMachineName
	I0903 23:42:18.390184  171911 buildroot.go:166] provisioning hostname "old-k8s-version-335468"
	I0903 23:42:18.390213  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetMachineName
	I0903 23:42:18.390400  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetSSHHostname
	I0903 23:42:18.393065  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has defined MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
	I0903 23:42:18.393474  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:6b:b9", ip: ""} in network mk-old-k8s-version-335468: {Iface:virbr3 ExpiryTime:2025-09-04 00:42:10 +0000 UTC Type:0 Mac:52:54:00:a2:6b:b9 Iaid: IPaddr:192.168.61.80 Prefix:24 Hostname:old-k8s-version-335468 Clientid:01:52:54:00:a2:6b:b9}
	I0903 23:42:18.393508  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has defined IP address 192.168.61.80 and MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
	I0903 23:42:18.393629  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetSSHPort
	I0903 23:42:18.393787  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetSSHKeyPath
	I0903 23:42:18.393963  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetSSHKeyPath
	I0903 23:42:18.394113  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetSSHUsername
	I0903 23:42:18.394288  171911 main.go:141] libmachine: Using SSH client type: native
	I0903 23:42:18.394494  171911 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83a420] 0x83d120 <nil>  [] 0s} 192.168.61.80 22 <nil> <nil>}
	I0903 23:42:18.394507  171911 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-335468 && echo "old-k8s-version-335468" | sudo tee /etc/hostname
	I0903 23:42:18.526146  171911 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-335468
	
	I0903 23:42:18.526174  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetSSHHostname
	I0903 23:42:18.528979  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has defined MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
	I0903 23:42:18.529317  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:6b:b9", ip: ""} in network mk-old-k8s-version-335468: {Iface:virbr3 ExpiryTime:2025-09-04 00:42:10 +0000 UTC Type:0 Mac:52:54:00:a2:6b:b9 Iaid: IPaddr:192.168.61.80 Prefix:24 Hostname:old-k8s-version-335468 Clientid:01:52:54:00:a2:6b:b9}
	I0903 23:42:18.529341  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has defined IP address 192.168.61.80 and MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
	I0903 23:42:18.529521  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetSSHPort
	I0903 23:42:18.529715  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetSSHKeyPath
	I0903 23:42:18.529887  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetSSHKeyPath
	I0903 23:42:18.530039  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetSSHUsername
	I0903 23:42:18.530198  171911 main.go:141] libmachine: Using SSH client type: native
	I0903 23:42:18.530443  171911 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83a420] 0x83d120 <nil>  [] 0s} 192.168.61.80 22 <nil> <nil>}
	I0903 23:42:18.530462  171911 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-335468' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-335468/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-335468' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0903 23:42:18.655502  171911 main.go:141] libmachine: SSH cmd err, output: <nil>: 
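The shell above makes the new hostname locally resolvable: if no /etc/hosts line already ends in the hostname, it rewrites the 127.0.1.1 entry in place, else appends one. The same edit as a pure-function Go sketch (ensureHostsEntry is an illustrative name):

// Sketch of the /etc/hosts edit performed by the shell snippet above,
// operating on the file contents as a string.
package main

import (
	"fmt"
	"regexp"
	"strings"
)

func ensureHostsEntry(hosts, name string) string {
	if regexp.MustCompile(`(?m)^.*\s`+regexp.QuoteMeta(name)+`$`).MatchString(hosts) {
		return hosts // hostname already resolvable
	}
	loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
	if loopback.MatchString(hosts) {
		return loopback.ReplaceAllString(hosts, "127.0.1.1 "+name)
	}
	return strings.TrimRight(hosts, "\n") + "\n127.0.1.1 " + name + "\n"
}

func main() {
	fmt.Print(ensureHostsEntry("127.0.0.1 localhost\n", "old-k8s-version-335468"))
}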
	I0903 23:42:18.655540  171911 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21341-109162/.minikube CaCertPath:/home/jenkins/minikube-integration/21341-109162/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21341-109162/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21341-109162/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21341-109162/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21341-109162/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21341-109162/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21341-109162/.minikube}
	I0903 23:42:18.655578  171911 buildroot.go:174] setting up certificates
	I0903 23:42:18.655591  171911 provision.go:84] configureAuth start
	I0903 23:42:18.655604  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetMachineName
	I0903 23:42:18.655930  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetIP
	I0903 23:42:18.658889  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has defined MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
	I0903 23:42:18.659364  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:6b:b9", ip: ""} in network mk-old-k8s-version-335468: {Iface:virbr3 ExpiryTime:2025-09-04 00:42:10 +0000 UTC Type:0 Mac:52:54:00:a2:6b:b9 Iaid: IPaddr:192.168.61.80 Prefix:24 Hostname:old-k8s-version-335468 Clientid:01:52:54:00:a2:6b:b9}
	I0903 23:42:18.659393  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has defined IP address 192.168.61.80 and MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
	I0903 23:42:18.659574  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetSSHHostname
	I0903 23:42:18.661700  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has defined MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
	I0903 23:42:18.661987  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:6b:b9", ip: ""} in network mk-old-k8s-version-335468: {Iface:virbr3 ExpiryTime:2025-09-04 00:42:10 +0000 UTC Type:0 Mac:52:54:00:a2:6b:b9 Iaid: IPaddr:192.168.61.80 Prefix:24 Hostname:old-k8s-version-335468 Clientid:01:52:54:00:a2:6b:b9}
	I0903 23:42:18.662012  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has defined IP address 192.168.61.80 and MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
	I0903 23:42:18.662134  171911 provision.go:143] copyHostCerts
	I0903 23:42:18.662197  171911 exec_runner.go:144] found /home/jenkins/minikube-integration/21341-109162/.minikube/ca.pem, removing ...
	I0903 23:42:18.662222  171911 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21341-109162/.minikube/ca.pem
	I0903 23:42:18.662298  171911 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21341-109162/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21341-109162/.minikube/ca.pem (1078 bytes)
	I0903 23:42:18.662418  171911 exec_runner.go:144] found /home/jenkins/minikube-integration/21341-109162/.minikube/cert.pem, removing ...
	I0903 23:42:18.662431  171911 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21341-109162/.minikube/cert.pem
	I0903 23:42:18.662468  171911 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21341-109162/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21341-109162/.minikube/cert.pem (1123 bytes)
	I0903 23:42:18.662563  171911 exec_runner.go:144] found /home/jenkins/minikube-integration/21341-109162/.minikube/key.pem, removing ...
	I0903 23:42:18.662573  171911 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21341-109162/.minikube/key.pem
	I0903 23:42:18.662606  171911 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21341-109162/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21341-109162/.minikube/key.pem (1675 bytes)
	I0903 23:42:18.662675  171911 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21341-109162/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21341-109162/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21341-109162/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-335468 san=[127.0.0.1 192.168.61.80 localhost minikube old-k8s-version-335468]
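configureAuth regenerates the machine's server certificate with the SANs listed in the san=[...] field above. A self-contained Go sketch of building a certificate with those SANs via crypto/x509; for brevity it self-signs, whereas minikube signs with its ca.pem/ca-key.pem:

// Sketch: server certificate carrying the SANs from the log above.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	tmpl := x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-335468"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the profile
		KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.61.80")},
		DNSNames:     []string{"localhost", "minikube", "old-k8s-version-335468"},
	}
	// Self-signed for the sketch: template doubles as parent.
	der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
	if err != nil {
		log.Fatal(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}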
	I0903 23:42:18.981415  171911 provision.go:177] copyRemoteCerts
	I0903 23:42:18.981472  171911 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0903 23:42:18.981497  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetSSHHostname
	I0903 23:42:18.983969  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has defined MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
	I0903 23:42:18.984256  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:6b:b9", ip: ""} in network mk-old-k8s-version-335468: {Iface:virbr3 ExpiryTime:2025-09-04 00:42:10 +0000 UTC Type:0 Mac:52:54:00:a2:6b:b9 Iaid: IPaddr:192.168.61.80 Prefix:24 Hostname:old-k8s-version-335468 Clientid:01:52:54:00:a2:6b:b9}
	I0903 23:42:18.984285  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has defined IP address 192.168.61.80 and MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
	I0903 23:42:18.984430  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetSSHPort
	I0903 23:42:18.984657  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetSSHKeyPath
	I0903 23:42:18.984813  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetSSHUsername
	I0903 23:42:18.984946  171911 sshutil.go:53] new ssh client: &{IP:192.168.61.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21341-109162/.minikube/machines/old-k8s-version-335468/id_rsa Username:docker}
	I0903 23:42:19.073026  171911 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0903 23:42:19.100256  171911 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0903 23:42:19.127225  171911 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0903 23:42:19.154111  171911 provision.go:87] duration metric: took 498.506096ms to configureAuth
	I0903 23:42:19.154138  171911 buildroot.go:189] setting minikube options for container-runtime
	I0903 23:42:19.154358  171911 config.go:182] Loaded profile config "old-k8s-version-335468": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0903 23:42:19.154450  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetSSHHostname
	I0903 23:42:19.157159  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has defined MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
	I0903 23:42:19.157588  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:6b:b9", ip: ""} in network mk-old-k8s-version-335468: {Iface:virbr3 ExpiryTime:2025-09-04 00:42:10 +0000 UTC Type:0 Mac:52:54:00:a2:6b:b9 Iaid: IPaddr:192.168.61.80 Prefix:24 Hostname:old-k8s-version-335468 Clientid:01:52:54:00:a2:6b:b9}
	I0903 23:42:19.157613  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has defined IP address 192.168.61.80 and MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
	I0903 23:42:19.157774  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetSSHPort
	I0903 23:42:19.157993  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetSSHKeyPath
	I0903 23:42:19.158192  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetSSHKeyPath
	I0903 23:42:19.158345  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetSSHUsername
	I0903 23:42:19.158511  171911 main.go:141] libmachine: Using SSH client type: native
	I0903 23:42:19.158713  171911 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83a420] 0x83d120 <nil>  [] 0s} 192.168.61.80 22 <nil> <nil>}
	I0903 23:42:19.158727  171911 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0903 23:42:19.403450  171911 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0903 23:42:19.403503  171911 machine.go:96] duration metric: took 1.133688609s to provisionDockerMachine
	I0903 23:42:19.403516  171911 start.go:293] postStartSetup for "old-k8s-version-335468" (driver="kvm2")
	I0903 23:42:19.403546  171911 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0903 23:42:19.403575  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .DriverName
	I0903 23:42:19.403961  171911 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0903 23:42:19.403992  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetSSHHostname
	I0903 23:42:19.406435  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has defined MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
	I0903 23:42:19.406792  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:6b:b9", ip: ""} in network mk-old-k8s-version-335468: {Iface:virbr3 ExpiryTime:2025-09-04 00:42:10 +0000 UTC Type:0 Mac:52:54:00:a2:6b:b9 Iaid: IPaddr:192.168.61.80 Prefix:24 Hostname:old-k8s-version-335468 Clientid:01:52:54:00:a2:6b:b9}
	I0903 23:42:19.406820  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has defined IP address 192.168.61.80 and MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
	I0903 23:42:19.406954  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetSSHPort
	I0903 23:42:19.407146  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetSSHKeyPath
	I0903 23:42:19.407310  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetSSHUsername
	I0903 23:42:19.407431  171911 sshutil.go:53] new ssh client: &{IP:192.168.61.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21341-109162/.minikube/machines/old-k8s-version-335468/id_rsa Username:docker}
	I0903 23:42:19.498010  171911 ssh_runner.go:195] Run: cat /etc/os-release
	I0903 23:42:19.502446  171911 info.go:137] Remote host: Buildroot 2025.02
	I0903 23:42:19.502472  171911 filesync.go:126] Scanning /home/jenkins/minikube-integration/21341-109162/.minikube/addons for local assets ...
	I0903 23:42:19.502533  171911 filesync.go:126] Scanning /home/jenkins/minikube-integration/21341-109162/.minikube/files for local assets ...
	I0903 23:42:19.502606  171911 filesync.go:149] local asset: /home/jenkins/minikube-integration/21341-109162/.minikube/files/etc/ssl/certs/1132882.pem -> 1132882.pem in /etc/ssl/certs
	I0903 23:42:19.502691  171911 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0903 23:42:19.513148  171911 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/files/etc/ssl/certs/1132882.pem --> /etc/ssl/certs/1132882.pem (1708 bytes)
	I0903 23:42:19.539923  171911 start.go:296] duration metric: took 136.378767ms for postStartSetup
	I0903 23:42:19.539966  171911 fix.go:56] duration metric: took 20.655932447s for fixHost
	I0903 23:42:19.539987  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetSSHHostname
	I0903 23:42:19.542771  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has defined MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
	I0903 23:42:19.543135  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:6b:b9", ip: ""} in network mk-old-k8s-version-335468: {Iface:virbr3 ExpiryTime:2025-09-04 00:42:10 +0000 UTC Type:0 Mac:52:54:00:a2:6b:b9 Iaid: IPaddr:192.168.61.80 Prefix:24 Hostname:old-k8s-version-335468 Clientid:01:52:54:00:a2:6b:b9}
	I0903 23:42:19.543163  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has defined IP address 192.168.61.80 and MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
	I0903 23:42:19.543432  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetSSHPort
	I0903 23:42:19.543661  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetSSHKeyPath
	I0903 23:42:19.543924  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetSSHKeyPath
	I0903 23:42:19.544083  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetSSHUsername
	I0903 23:42:19.544239  171911 main.go:141] libmachine: Using SSH client type: native
	I0903 23:42:19.544450  171911 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83a420] 0x83d120 <nil>  [] 0s} 192.168.61.80 22 <nil> <nil>}
	I0903 23:42:19.544464  171911 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0903 23:42:19.658283  171911 main.go:141] libmachine: SSH cmd err, output: <nil>: 1756942939.619184337
	
	I0903 23:42:19.658310  171911 fix.go:216] guest clock: 1756942939.619184337
	I0903 23:42:19.658320  171911 fix.go:229] Guest: 2025-09-03 23:42:19.619184337 +0000 UTC Remote: 2025-09-03 23:42:19.539969783 +0000 UTC m=+20.799287975 (delta=79.214554ms)
	I0903 23:42:19.658340  171911 fix.go:200] guest clock delta is within tolerance: 79.214554ms
	I0903 23:42:19.658346  171911 start.go:83] releasing machines lock for "old-k8s-version-335468", held for 20.774323746s
	I0903 23:42:19.658367  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .DriverName
	I0903 23:42:19.658686  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetIP
	I0903 23:42:19.661465  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has defined MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
	I0903 23:42:19.661820  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:6b:b9", ip: ""} in network mk-old-k8s-version-335468: {Iface:virbr3 ExpiryTime:2025-09-04 00:42:10 +0000 UTC Type:0 Mac:52:54:00:a2:6b:b9 Iaid: IPaddr:192.168.61.80 Prefix:24 Hostname:old-k8s-version-335468 Clientid:01:52:54:00:a2:6b:b9}
	I0903 23:42:19.661848  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has defined IP address 192.168.61.80 and MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
	I0903 23:42:19.662028  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .DriverName
	I0903 23:42:19.662525  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .DriverName
	I0903 23:42:19.662702  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .DriverName
	I0903 23:42:19.662785  171911 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0903 23:42:19.662846  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetSSHHostname
	I0903 23:42:19.662927  171911 ssh_runner.go:195] Run: cat /version.json
	I0903 23:42:19.662943  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetSSHHostname
	I0903 23:42:19.665354  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has defined MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
	I0903 23:42:19.665683  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has defined MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
	I0903 23:42:19.665718  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:6b:b9", ip: ""} in network mk-old-k8s-version-335468: {Iface:virbr3 ExpiryTime:2025-09-04 00:42:10 +0000 UTC Type:0 Mac:52:54:00:a2:6b:b9 Iaid: IPaddr:192.168.61.80 Prefix:24 Hostname:old-k8s-version-335468 Clientid:01:52:54:00:a2:6b:b9}
	I0903 23:42:19.665740  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has defined IP address 192.168.61.80 and MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
	I0903 23:42:19.665938  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetSSHPort
	I0903 23:42:19.666142  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:6b:b9", ip: ""} in network mk-old-k8s-version-335468: {Iface:virbr3 ExpiryTime:2025-09-04 00:42:10 +0000 UTC Type:0 Mac:52:54:00:a2:6b:b9 Iaid: IPaddr:192.168.61.80 Prefix:24 Hostname:old-k8s-version-335468 Clientid:01:52:54:00:a2:6b:b9}
	I0903 23:42:19.666154  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetSSHKeyPath
	I0903 23:42:19.666167  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has defined IP address 192.168.61.80 and MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
	I0903 23:42:19.666342  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetSSHUsername
	I0903 23:42:19.666367  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetSSHPort
	I0903 23:42:19.666528  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetSSHKeyPath
	I0903 23:42:19.666520  171911 sshutil.go:53] new ssh client: &{IP:192.168.61.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21341-109162/.minikube/machines/old-k8s-version-335468/id_rsa Username:docker}
	I0903 23:42:19.666673  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetSSHUsername
	I0903 23:42:19.666795  171911 sshutil.go:53] new ssh client: &{IP:192.168.61.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21341-109162/.minikube/machines/old-k8s-version-335468/id_rsa Username:docker}
	I0903 23:42:19.778070  171911 ssh_runner.go:195] Run: systemctl --version
	I0903 23:42:19.783809  171911 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0903 23:42:19.925729  171911 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0903 23:42:19.931814  171911 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0903 23:42:19.931870  171911 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0903 23:42:19.950008  171911 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
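The find/mv pipeline above sidelines bridge and podman CNI configs by renaming them with a .mk_disabled suffix so minikube's own CNI choice wins. An equivalent Go sketch (disableBridgeConfigs is an illustrative name):

// Sketch of the CNI-disable step: rename matching configs in /etc/cni/net.d.
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func disableBridgeConfigs(dir string) ([]string, error) {
	var disabled []string
	entries, err := os.ReadDir(dir)
	if err != nil {
		return nil, err
	}
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
			src := filepath.Join(dir, name)
			if err := os.Rename(src, src+".mk_disabled"); err != nil {
				return disabled, err
			}
			disabled = append(disabled, src)
		}
	}
	return disabled, nil
}

func main() {
	moved, err := disableBridgeConfigs("/etc/cni/net.d")
	fmt.Println(moved, err)
}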
	I0903 23:42:19.950038  171911 start.go:495] detecting cgroup driver to use...
	I0903 23:42:19.950104  171911 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0903 23:42:19.969078  171911 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0903 23:42:19.984800  171911 docker.go:218] disabling cri-docker service (if available) ...
	I0903 23:42:19.984862  171911 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0903 23:42:19.999909  171911 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0903 23:42:20.014636  171911 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0903 23:42:20.158742  171911 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0903 23:42:20.297981  171911 docker.go:234] disabling docker service ...
	I0903 23:42:20.298074  171911 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0903 23:42:20.314384  171911 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0903 23:42:20.327885  171911 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0903 23:42:20.530158  171911 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0903 23:42:20.665612  171911 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0903 23:42:20.680150  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0903 23:42:20.700792  171911 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0903 23:42:20.700857  171911 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0903 23:42:20.712182  171911 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0903 23:42:20.712258  171911 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0903 23:42:20.723777  171911 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0903 23:42:20.734863  171911 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0903 23:42:20.746438  171911 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0903 23:42:20.759910  171911 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0903 23:42:20.769436  171911 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0903 23:42:20.769493  171911 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0903 23:42:20.788756  171911 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
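The sysctl failure above is expected on a fresh boot: /proc/sys/net/bridge/ only exists once br_netfilter is loaded, so the probe fails, the module is loaded, and IPv4 forwarding is then enabled. A Go sketch of that fallback order, shelling out the way ssh_runner does (requires root; illustrative only):

// Sketch of the netfilter fallback sequence in the log above.
package main

import (
	"fmt"
	"os/exec"
)

func ensureBridgeNetfilter() error {
	// Probe first: fails with "cannot stat" when the module isn't loaded.
	if err := exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
		if err := exec.Command("sudo", "modprobe", "br_netfilter").Run(); err != nil {
			return fmt.Errorf("modprobe br_netfilter: %w", err)
		}
	}
	return exec.Command("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward").Run()
}

func main() {
	fmt.Println(ensureBridgeNetfilter())
}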
	I0903 23:42:20.799437  171911 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0903 23:42:20.954989  171911 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0903 23:42:21.072550  171911 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0903 23:42:21.072649  171911 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0903 23:42:21.077536  171911 start.go:563] Will wait 60s for crictl version
	I0903 23:42:21.077592  171911 ssh_runner.go:195] Run: which crictl
	I0903 23:42:21.081093  171911 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0903 23:42:21.119015  171911 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0903 23:42:21.119097  171911 ssh_runner.go:195] Run: crio --version
	I0903 23:42:21.146341  171911 ssh_runner.go:195] Run: crio --version
	I0903 23:42:21.176700  171911 out.go:179] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0903 23:42:21.177731  171911 main.go:141] libmachine: (old-k8s-version-335468) Calling .GetIP
	I0903 23:42:21.180269  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has defined MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
	I0903 23:42:21.180568  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:6b:b9", ip: ""} in network mk-old-k8s-version-335468: {Iface:virbr3 ExpiryTime:2025-09-04 00:42:10 +0000 UTC Type:0 Mac:52:54:00:a2:6b:b9 Iaid: IPaddr:192.168.61.80 Prefix:24 Hostname:old-k8s-version-335468 Clientid:01:52:54:00:a2:6b:b9}
	I0903 23:42:21.180599  171911 main.go:141] libmachine: (old-k8s-version-335468) DBG | domain old-k8s-version-335468 has defined IP address 192.168.61.80 and MAC address 52:54:00:a2:6b:b9 in network mk-old-k8s-version-335468
	I0903 23:42:21.180856  171911 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0903 23:42:21.185094  171911 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0903 23:42:21.198784  171911 kubeadm.go:875] updating cluster {Name:old-k8s-version-335468 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21147/minikube-v1.36.0-1753487480-21147-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-335468 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.80 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0903 23:42:21.198887  171911 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0903 23:42:21.198930  171911 ssh_runner.go:195] Run: sudo crictl images --output json
	I0903 23:42:21.245403  171911 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0903 23:42:21.245474  171911 ssh_runner.go:195] Run: which lz4
	I0903 23:42:21.249531  171911 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0903 23:42:21.253934  171911 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0903 23:42:21.253970  171911 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0903 23:42:22.735338  171911 crio.go:462] duration metric: took 1.48583725s to copy over tarball
	I0903 23:42:22.735409  171911 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0903 23:42:24.901192  171911 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.165749867s)
	I0903 23:42:24.901224  171911 crio.go:469] duration metric: took 2.165856963s to extract the tarball
	I0903 23:42:24.901234  171911 ssh_runner.go:146] rm: /preloaded.tar.lz4
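The preload flow above is: stat the tarball on the guest (missing on first boot), scp the cached one over, extract it with lz4 while preserving xattrs, then delete it. A compact Go sketch of the guest-side commands; the real flow runs them through the ssh_runner rather than locally:

// Sketch of the preload check/extract/cleanup sequence from the log.
package main

import (
	"log"
	"os/exec"
)

func main() {
	const tarball = "/preloaded.tar.lz4"
	if exec.Command("stat", "-c", "%s %y", tarball).Run() != nil {
		// In the real flow this is an scp over the ssh_runner, not a local copy.
		log.Printf("%s missing; the cached preload would be copied over here", tarball)
		return
	}
	// --xattrs keeps security.capability bits (e.g. on the kube-proxy binary).
	cmd := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include",
		"security.capability", "-I", "lz4", "-C", "/var", "-xf", tarball)
	if err := cmd.Run(); err != nil {
		log.Fatalf("extract: %v", err)
	}
	if err := exec.Command("sudo", "rm", tarball).Run(); err != nil {
		log.Fatalf("cleanup: %v", err)
	}
}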
	I0903 23:42:24.945210  171911 ssh_runner.go:195] Run: sudo crictl images --output json
	I0903 23:42:24.977983  171911 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0903 23:42:24.978011  171911 cache_images.go:89] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0903 23:42:24.978093  171911 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0903 23:42:24.978095  171911 image.go:138] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0903 23:42:24.978122  171911 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0903 23:42:24.978134  171911 image.go:138] retrieving image: registry.k8s.io/coredns:1.7.0
	I0903 23:42:24.978092  171911 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0903 23:42:24.978167  171911 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0903 23:42:24.978180  171911 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0903 23:42:24.978151  171911 image.go:138] retrieving image: registry.k8s.io/pause:3.2
	I0903 23:42:24.979632  171911 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0903 23:42:24.979647  171911 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0903 23:42:24.979664  171911 image.go:181] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0903 23:42:24.979669  171911 image.go:181] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0903 23:42:24.979651  171911 image.go:181] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0903 23:42:24.979683  171911 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0903 23:42:24.979708  171911 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0903 23:42:24.979715  171911 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0903 23:42:25.139789  171911 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0903 23:42:25.149556  171911 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0903 23:42:25.153427  171911 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0903 23:42:25.156447  171911 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0903 23:42:25.166085  171911 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0903 23:42:25.178841  171911 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0903 23:42:25.180227  171911 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0903 23:42:25.223305  171911 cache_images.go:117] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0903 23:42:25.223359  171911 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0903 23:42:25.223398  171911 ssh_runner.go:195] Run: which crictl
	I0903 23:42:25.287785  171911 cache_images.go:117] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0903 23:42:25.287834  171911 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0903 23:42:25.287879  171911 ssh_runner.go:195] Run: which crictl
	I0903 23:42:25.303285  171911 cache_images.go:117] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0903 23:42:25.303336  171911 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0903 23:42:25.303345  171911 cache_images.go:117] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0903 23:42:25.303383  171911 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0903 23:42:25.303392  171911 ssh_runner.go:195] Run: which crictl
	I0903 23:42:25.303431  171911 ssh_runner.go:195] Run: which crictl
	I0903 23:42:25.311751  171911 cache_images.go:117] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0903 23:42:25.311798  171911 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0903 23:42:25.311803  171911 cache_images.go:117] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0903 23:42:25.311842  171911 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0903 23:42:25.311855  171911 ssh_runner.go:195] Run: which crictl
	I0903 23:42:25.311888  171911 ssh_runner.go:195] Run: which crictl
	I0903 23:42:25.324120  171911 cache_images.go:117] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0903 23:42:25.324164  171911 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0903 23:42:25.324187  171911 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0903 23:42:25.324202  171911 ssh_runner.go:195] Run: which crictl
	I0903 23:42:25.324241  171911 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0903 23:42:25.324655  171911 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0903 23:42:25.324678  171911 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0903 23:42:25.324906  171911 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0903 23:42:25.325033  171911 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0903 23:42:25.422314  171911 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0903 23:42:25.422412  171911 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0903 23:42:25.436779  171911 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0903 23:42:25.479512  171911 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0903 23:42:25.482280  171911 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0903 23:42:25.482370  171911 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0903 23:42:25.482417  171911 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0903 23:42:25.528977  171911 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0903 23:42:25.529015  171911 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0903 23:42:25.566801  171911 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0903 23:42:25.639744  171911 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0903 23:42:25.639744  171911 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0903 23:42:25.639814  171911 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0903 23:42:25.639829  171911 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0903 23:42:25.680104  171911 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21341-109162/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0903 23:42:25.680249  171911 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0903 23:42:25.680257  171911 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21341-109162/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0903 23:42:25.724922  171911 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21341-109162/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0903 23:42:25.747501  171911 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21341-109162/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0903 23:42:25.747577  171911 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21341-109162/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0903 23:42:25.751768  171911 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21341-109162/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0903 23:42:25.760936  171911 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21341-109162/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0903 23:42:26.285671  171911 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0903 23:42:26.426376  171911 cache_images.go:93] duration metric: took 1.448344647s to LoadCachedImages
	W0903 23:42:26.426480  171911 out.go:285] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/21341-109162/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0: no such file or directory
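Each "needs transfer" line above comes from comparing the image ID the runtime reports against the hash the cache expects; a mismatch or a missing image triggers the rmi/reload sequence that follows. A tiny Go sketch of that decision (needsTransfer is a hypothetical helper; the real logic lives in cache_images.go):

// Sketch: when must a cached image be re-pushed into the runtime?
package main

import "fmt"

func needsTransfer(runtimeID, cachedID string) bool {
	// Empty runtimeID means `podman image inspect` found nothing at all.
	return runtimeID == "" || runtimeID != cachedID
}

func main() {
	fmt.Println(needsTransfer("", "0369cf4303ffdb"))               // missing -> true
	fmt.Println(needsTransfer("deadbeef", "0369cf4303ffdb"))       // stale   -> true
	fmt.Println(needsTransfer("0369cf4303ffdb", "0369cf4303ffdb")) // ok      -> false
}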
	I0903 23:42:26.426499  171911 kubeadm.go:926] updating node { 192.168.61.80 8443 v1.20.0 crio true true} ...
	I0903 23:42:26.426618  171911 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-335468 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.80
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-335468 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0903 23:42:26.426702  171911 ssh_runner.go:195] Run: crio config
	I0903 23:42:26.476895  171911 cni.go:84] Creating CNI manager for ""
	I0903 23:42:26.476919  171911 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0903 23:42:26.476933  171911 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0903 23:42:26.476956  171911 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.80 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-335468 NodeName:old-k8s-version-335468 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.80"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.80 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0903 23:42:26.477114  171911 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.80
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-335468"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.80
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.80"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
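
Note: the kubeadm config above is four YAML documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) generated on the host and shipped to the node as /var/tmp/minikube/kubeadm.yaml.new a few lines below. A stdlib-only Go sketch that splits such a multi-document file and reports each kind (illustrative, not minikube's code):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // Split the staged kubeadm config into its YAML documents and print
    // each kind, mirroring the four documents above.
    func main() {
        data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            return
        }
        for _, doc := range strings.Split(string(data), "\n---\n") {
            for _, line := range strings.Split(doc, "\n") {
                if strings.HasPrefix(line, "kind: ") {
                    fmt.Println(strings.TrimPrefix(line, "kind: "))
                }
            }
        }
    }
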
	
	I0903 23:42:26.477233  171911 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0903 23:42:26.490694  171911 binaries.go:44] Found k8s binaries, skipping transfer
	I0903 23:42:26.490775  171911 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0903 23:42:26.501798  171911 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0903 23:42:26.520806  171911 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0903 23:42:26.539068  171911 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0903 23:42:26.558168  171911 ssh_runner.go:195] Run: grep 192.168.61.80	control-plane.minikube.internal$ /etc/hosts
	I0903 23:42:26.562134  171911 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.80	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
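
Note: the one-liner above strips any stale line ending in a tab plus control-plane.minikube.internal from /etc/hosts, appends the fresh 192.168.61.80 mapping, and installs the result with a single privileged cp of a temp file, so the hosts file is never left half-written. The same logic in Go (a sketch under those assumptions; it only stages the temp file and leaves the privileged copy to the caller):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func main() {
        const host = "control-plane.minikube.internal"
        data, err := os.ReadFile("/etc/hosts")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            return
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            if !strings.HasSuffix(line, "\t"+host) { // drop the stale entry
                kept = append(kept, line)
            }
        }
        kept = append(kept, "192.168.61.80\t"+host)
        tmp := fmt.Sprintf("/tmp/h.%d", os.Getpid()) // mirrors /tmp/h.$$
        if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0644); err == nil {
            fmt.Println("staged", tmp, "- a privileged cp to /etc/hosts would follow")
        }
    }
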
	I0903 23:42:26.575449  171911 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0903 23:42:26.711961  171911 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0903 23:42:26.759354  171911 certs.go:68] Setting up /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/old-k8s-version-335468 for IP: 192.168.61.80
	I0903 23:42:26.759380  171911 certs.go:194] generating shared ca certs ...
	I0903 23:42:26.759407  171911 certs.go:226] acquiring lock for ca certs: {Name:mk06ceac58fd506484973632a7bd0b701183c2c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0903 23:42:26.759577  171911 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21341-109162/.minikube/ca.key
	I0903 23:42:26.759632  171911 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21341-109162/.minikube/proxy-client-ca.key
	I0903 23:42:26.759646  171911 certs.go:256] generating profile certs ...
	I0903 23:42:26.759743  171911 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/old-k8s-version-335468/client.key
	I0903 23:42:26.759820  171911 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/old-k8s-version-335468/apiserver.key.f2828629
	I0903 23:42:26.759878  171911 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/old-k8s-version-335468/proxy-client.key
	I0903 23:42:26.760013  171911 certs.go:484] found cert: /home/jenkins/minikube-integration/21341-109162/.minikube/certs/113288.pem (1338 bytes)
	W0903 23:42:26.760052  171911 certs.go:480] ignoring /home/jenkins/minikube-integration/21341-109162/.minikube/certs/113288_empty.pem, impossibly tiny 0 bytes
	I0903 23:42:26.760066  171911 certs.go:484] found cert: /home/jenkins/minikube-integration/21341-109162/.minikube/certs/ca-key.pem (1675 bytes)
	I0903 23:42:26.760099  171911 certs.go:484] found cert: /home/jenkins/minikube-integration/21341-109162/.minikube/certs/ca.pem (1078 bytes)
	I0903 23:42:26.760133  171911 certs.go:484] found cert: /home/jenkins/minikube-integration/21341-109162/.minikube/certs/cert.pem (1123 bytes)
	I0903 23:42:26.760167  171911 certs.go:484] found cert: /home/jenkins/minikube-integration/21341-109162/.minikube/certs/key.pem (1675 bytes)
	I0903 23:42:26.760220  171911 certs.go:484] found cert: /home/jenkins/minikube-integration/21341-109162/.minikube/files/etc/ssl/certs/1132882.pem (1708 bytes)
	I0903 23:42:26.760811  171911 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0903 23:42:26.791932  171911 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0903 23:42:26.824575  171911 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0903 23:42:26.853358  171911 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0903 23:42:26.887411  171911 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/old-k8s-version-335468/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0903 23:42:26.914421  171911 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/old-k8s-version-335468/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0903 23:42:26.940984  171911 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/old-k8s-version-335468/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0903 23:42:26.968279  171911 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/old-k8s-version-335468/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0903 23:42:26.995059  171911 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/certs/113288.pem --> /usr/share/ca-certificates/113288.pem (1338 bytes)
	I0903 23:42:27.023211  171911 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/files/etc/ssl/certs/1132882.pem --> /usr/share/ca-certificates/1132882.pem (1708 bytes)
	I0903 23:42:27.049929  171911 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-109162/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0903 23:42:27.076578  171911 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0903 23:42:27.095209  171911 ssh_runner.go:195] Run: openssl version
	I0903 23:42:27.100879  171911 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0903 23:42:27.112933  171911 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0903 23:42:27.118040  171911 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep  3 22:28 /usr/share/ca-certificates/minikubeCA.pem
	I0903 23:42:27.118090  171911 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0903 23:42:27.125341  171911 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0903 23:42:27.140002  171911 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/113288.pem && ln -fs /usr/share/ca-certificates/113288.pem /etc/ssl/certs/113288.pem"
	I0903 23:42:27.154488  171911 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/113288.pem
	I0903 23:42:27.159574  171911 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep  3 22:36 /usr/share/ca-certificates/113288.pem
	I0903 23:42:27.159635  171911 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/113288.pem
	I0903 23:42:27.166580  171911 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/113288.pem /etc/ssl/certs/51391683.0"
	I0903 23:42:27.180666  171911 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1132882.pem && ln -fs /usr/share/ca-certificates/1132882.pem /etc/ssl/certs/1132882.pem"
	I0903 23:42:27.194853  171911 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1132882.pem
	I0903 23:42:27.199793  171911 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep  3 22:36 /usr/share/ca-certificates/1132882.pem
	I0903 23:42:27.199841  171911 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1132882.pem
	I0903 23:42:27.206851  171911 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1132882.pem /etc/ssl/certs/3ec20f2e.0"
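
Note: the test -L / ln -fs commands above recreate the OpenSSL trust-store convention: each CA file in /etc/ssl/certs must be reachable via a symlink named after its subject hash (b5213941.0, 51391683.0, 3ec20f2e.0 in this run). A Go sketch of one such step, shelling out to openssl just as the log does (illustrative; writing into /etc/ssl/certs needs root):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // linkCert computes the subject hash of a PEM certificate and points
    // /etc/ssl/certs/<hash>.0 at it, emulating `ln -fs` above.
    func linkCert(pemPath string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
        if err != nil {
            return err
        }
        hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
        link := filepath.Join("/etc/ssl/certs", hash+".0")
        _ = os.Remove(link) // force, like -f
        return os.Symlink(pemPath, link)
    }

    func main() {
        if err := linkCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }
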
	I0903 23:42:27.221163  171911 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0903 23:42:27.226347  171911 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0903 23:42:27.233982  171911 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0903 23:42:27.241290  171911 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0903 23:42:27.248464  171911 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0903 23:42:27.255916  171911 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0903 23:42:27.263308  171911 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
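
Note: each `openssl x509 ... -checkend 86400` above asks whether the certificate expires within the next 24 hours (86,400 s); a non-zero exit would trigger regeneration. The stdlib equivalent in Go, as a sketch (path copied from the log):

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin reports whether the cert at path expires inside d,
    // matching the semantics of `openssl x509 -checkend`.
    func expiresWithin(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM block in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
        soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
        fmt.Println(soon, err)
    }
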
	I0903 23:42:27.270533  171911 kubeadm.go:392] StartCluster: {Name:old-k8s-version-335468 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21147/minikube-v1.36.0-1753487480-21147-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-335468 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.80 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0903 23:42:27.270648  171911 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0903 23:42:27.270739  171911 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0903 23:42:27.306525  171911 cri.go:89] found id: ""
	I0903 23:42:27.306598  171911 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0903 23:42:27.318570  171911 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0903 23:42:27.318592  171911 kubeadm.go:589] restartPrimaryControlPlane start ...
	I0903 23:42:27.318639  171911 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0903 23:42:27.329789  171911 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0903 23:42:27.330196  171911 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-335468" does not appear in /home/jenkins/minikube-integration/21341-109162/kubeconfig
	I0903 23:42:27.330362  171911 kubeconfig.go:62] /home/jenkins/minikube-integration/21341-109162/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-335468" cluster setting kubeconfig missing "old-k8s-version-335468" context setting]
	I0903 23:42:27.330702  171911 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21341-109162/kubeconfig: {Name:mkb8e4377749caffe9aea4452a6b5de9f0dc7427 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0903 23:42:27.374758  171911 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0903 23:42:27.386214  171911 kubeadm.go:626] The running cluster does not require reconfiguration: 192.168.61.80
	I0903 23:42:27.386258  171911 kubeadm.go:1152] stopping kube-system containers ...
	I0903 23:42:27.386272  171911 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0903 23:42:27.386331  171911 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0903 23:42:27.425149  171911 cri.go:89] found id: ""
	I0903 23:42:27.425215  171911 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0903 23:42:27.445596  171911 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0903 23:42:27.456478  171911 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0903 23:42:27.456499  171911 kubeadm.go:157] found existing configuration files:
	
	I0903 23:42:27.456562  171911 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0903 23:42:27.466434  171911 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0903 23:42:27.466490  171911 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0903 23:42:27.477542  171911 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0903 23:42:27.487494  171911 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0903 23:42:27.487556  171911 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0903 23:42:27.498329  171911 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0903 23:42:27.508036  171911 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0903 23:42:27.508096  171911 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0903 23:42:27.521941  171911 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0903 23:42:27.531852  171911 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0903 23:42:27.531907  171911 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
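
Note: the grep/rm sequence above is a cleanup pass: each kubeconfig under /etc/kubernetes is kept only if it already points at https://control-plane.minikube.internal:8443; here all four greps exit 2 because the files do not exist, so the rm -f calls are no-ops. A compact Go sketch of the same check-or-remove loop (illustrative, not minikube's code):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func cleanStale(endpoint string, files []string) {
        for _, f := range files {
            data, err := os.ReadFile(f)
            if err == nil && strings.Contains(string(data), endpoint) {
                continue // config already targets the right endpoint
            }
            _ = os.Remove(f) // rm -f semantics: a missing file is fine
            fmt.Printf("%q may not be in %s - removed\n", endpoint, f)
        }
    }

    func main() {
        cleanStale("https://control-plane.minikube.internal:8443", []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        })
    }
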
	I0903 23:42:27.542155  171911 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0903 23:42:27.553239  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0903 23:42:27.633226  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0903 23:42:28.602124  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0903 23:42:28.854495  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0903 23:42:28.947073  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
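
Note: rather than a full `kubeadm init`, the restart path above replays individual init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the staged config. A sketch of that sequencing (binary path and phase names copied from the log; the error handling is illustrative):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        kubeadm := "/var/lib/minikube/binaries/v1.20.0/kubeadm"
        phases := [][]string{
            {"init", "phase", "certs", "all"},
            {"init", "phase", "kubeconfig", "all"},
            {"init", "phase", "kubelet-start"},
            {"init", "phase", "control-plane", "all"},
            {"init", "phase", "etcd", "local"},
        }
        for _, p := range phases {
            args := append(append([]string{}, p...), "--config", "/var/tmp/minikube/kubeadm.yaml")
            if out, err := exec.Command(kubeadm, args...).CombinedOutput(); err != nil {
                fmt.Printf("phase %v failed: %v\n%s", p, err, out)
                return // later phases depend on earlier ones
            }
        }
    }
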
	I0903 23:42:29.027974  171911 api_server.go:52] waiting for apiserver process to appear ...
	I0903 23:42:29.028070  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:42:29.528786  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:42:30.029080  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:42:30.529093  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:42:31.029115  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:42:31.528486  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:42:32.029181  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:42:32.528450  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:42:33.028477  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:42:33.529071  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:42:34.028981  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:42:34.528195  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:42:35.028453  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:42:35.528706  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:42:36.028199  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:42:36.528759  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:42:37.028416  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:42:37.528169  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:42:38.028416  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:42:38.528882  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:42:39.028560  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:42:39.528880  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:42:40.029029  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:42:40.528664  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:42:41.028784  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:42:41.528383  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:42:42.028492  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:42:42.528853  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:42:43.028647  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:42:43.528940  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:42:44.028219  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:42:44.528661  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:42:45.029081  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:42:45.528521  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:42:46.028610  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:42:46.529168  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:42:47.028585  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:42:47.528452  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:42:48.028847  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:42:48.528533  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:42:49.028538  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:42:49.529012  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:42:50.029175  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:42:50.528266  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:42:51.028443  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:42:51.528936  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:42:52.028174  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:42:52.528782  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:42:53.028946  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:42:53.529016  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:42:54.029217  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:42:54.528827  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:42:55.028743  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:42:55.528564  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:42:56.029013  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:42:56.528850  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:42:57.028379  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:42:57.528543  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:42:58.028863  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:42:58.528547  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:42:59.028618  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:42:59.528316  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:43:00.028825  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:43:00.528728  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:43:01.028929  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:43:01.528618  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:43:02.028774  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:43:02.528830  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:43:03.028902  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:43:03.528997  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:43:04.028460  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:43:04.529085  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:43:05.028814  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:43:05.528240  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:43:06.028382  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:43:06.528648  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:43:07.028776  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:43:07.528630  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:43:08.028650  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:43:08.528498  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:43:09.028874  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:43:09.529055  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:43:10.028335  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:43:10.528817  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:43:11.029166  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:43:11.528517  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:43:12.028284  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:43:12.528580  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:43:13.028324  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:43:13.528516  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:43:14.028872  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:43:14.529100  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:43:15.029032  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:43:15.528427  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:43:16.028297  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:43:16.528182  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:43:17.028871  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:43:17.528931  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:43:18.028363  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:43:18.528960  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:43:19.028522  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:43:19.528560  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:43:20.028879  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:43:20.528155  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:43:21.028536  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:43:21.528372  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:43:22.028985  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:43:22.529094  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:43:23.028627  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:43:23.529025  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:43:24.028457  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:43:24.528968  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:43:25.028323  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:43:25.528323  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:43:26.028859  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:43:26.528886  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:43:27.028648  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:43:27.528292  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:43:28.028496  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:43:28.528556  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
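
Note: the block above is a single wait loop, not hundreds of distinct steps: minikube re-runs `sudo pgrep -xnf kube-apiserver.*minikube.*` every ~500 ms (see the .028/.528 timestamps) until the apiserver process appears or the wait gives up, after which it falls through to the container checks below. A sketch of that loop (the timeout value is an assumption; the pgrep arguments are copied from the log):

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // waitForAPIServer polls pgrep at a fixed cadence until the process
    // shows up or the deadline passes, mirroring the loop above.
    func waitForAPIServer(timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
                return nil // exit status 0 means a match was found
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("kube-apiserver did not appear within %s", timeout)
    }

    func main() {
        fmt.Println(waitForAPIServer(60 * time.Second))
    }
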
	I0903 23:43:29.028482  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:43:29.028567  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:43:29.065203  171911 cri.go:89] found id: ""
	I0903 23:43:29.065238  171911 logs.go:282] 0 containers: []
	W0903 23:43:29.065249  171911 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:43:29.065257  171911 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:43:29.065323  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:43:29.099969  171911 cri.go:89] found id: ""
	I0903 23:43:29.100008  171911 logs.go:282] 0 containers: []
	W0903 23:43:29.100020  171911 logs.go:284] No container was found matching "etcd"
	I0903 23:43:29.100030  171911 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:43:29.100100  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:43:29.134038  171911 cri.go:89] found id: ""
	I0903 23:43:29.134075  171911 logs.go:282] 0 containers: []
	W0903 23:43:29.134088  171911 logs.go:284] No container was found matching "coredns"
	I0903 23:43:29.134096  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:43:29.134166  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:43:29.167976  171911 cri.go:89] found id: ""
	I0903 23:43:29.168009  171911 logs.go:282] 0 containers: []
	W0903 23:43:29.168018  171911 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:43:29.168025  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:43:29.168081  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:43:29.203375  171911 cri.go:89] found id: ""
	I0903 23:43:29.203406  171911 logs.go:282] 0 containers: []
	W0903 23:43:29.203414  171911 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:43:29.203420  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:43:29.203487  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:43:29.237316  171911 cri.go:89] found id: ""
	I0903 23:43:29.237347  171911 logs.go:282] 0 containers: []
	W0903 23:43:29.237358  171911 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:43:29.237366  171911 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:43:29.237456  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:43:29.271010  171911 cri.go:89] found id: ""
	I0903 23:43:29.271036  171911 logs.go:282] 0 containers: []
	W0903 23:43:29.271044  171911 logs.go:284] No container was found matching "kindnet"
	I0903 23:43:29.271051  171911 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0903 23:43:29.271115  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0903 23:43:29.305355  171911 cri.go:89] found id: ""
	I0903 23:43:29.305398  171911 logs.go:282] 0 containers: []
	W0903 23:43:29.305410  171911 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0903 23:43:29.305424  171911 logs.go:123] Gathering logs for container status ...
	I0903 23:43:29.305450  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0903 23:43:29.343610  171911 logs.go:123] Gathering logs for kubelet ...
	I0903 23:43:29.343647  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0903 23:43:29.390474  171911 logs.go:123] Gathering logs for dmesg ...
	I0903 23:43:29.390513  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0903 23:43:29.404227  171911 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:43:29.404255  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:43:29.473354  171911 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0903 23:43:29.473377  171911 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:43:29.473409  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0903 23:43:32.045578  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:43:32.064442  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:43:32.064510  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:43:32.104125  171911 cri.go:89] found id: ""
	I0903 23:43:32.104153  171911 logs.go:282] 0 containers: []
	W0903 23:43:32.104162  171911 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:43:32.104167  171911 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:43:32.104219  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:43:32.140304  171911 cri.go:89] found id: ""
	I0903 23:43:32.140344  171911 logs.go:282] 0 containers: []
	W0903 23:43:32.140357  171911 logs.go:284] No container was found matching "etcd"
	I0903 23:43:32.140366  171911 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:43:32.140436  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:43:32.174194  171911 cri.go:89] found id: ""
	I0903 23:43:32.174227  171911 logs.go:282] 0 containers: []
	W0903 23:43:32.174241  171911 logs.go:284] No container was found matching "coredns"
	I0903 23:43:32.174249  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:43:32.174322  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:43:32.207732  171911 cri.go:89] found id: ""
	I0903 23:43:32.207760  171911 logs.go:282] 0 containers: []
	W0903 23:43:32.207768  171911 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:43:32.207775  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:43:32.207828  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:43:32.242885  171911 cri.go:89] found id: ""
	I0903 23:43:32.242919  171911 logs.go:282] 0 containers: []
	W0903 23:43:32.242927  171911 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:43:32.242934  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:43:32.242991  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:43:32.276911  171911 cri.go:89] found id: ""
	I0903 23:43:32.276938  171911 logs.go:282] 0 containers: []
	W0903 23:43:32.276945  171911 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:43:32.276952  171911 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:43:32.277004  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:43:32.310660  171911 cri.go:89] found id: ""
	I0903 23:43:32.310689  171911 logs.go:282] 0 containers: []
	W0903 23:43:32.310697  171911 logs.go:284] No container was found matching "kindnet"
	I0903 23:43:32.310703  171911 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0903 23:43:32.310753  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0903 23:43:32.344285  171911 cri.go:89] found id: ""
	I0903 23:43:32.344316  171911 logs.go:282] 0 containers: []
	W0903 23:43:32.344327  171911 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0903 23:43:32.344341  171911 logs.go:123] Gathering logs for kubelet ...
	I0903 23:43:32.344357  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0903 23:43:32.394031  171911 logs.go:123] Gathering logs for dmesg ...
	I0903 23:43:32.394079  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0903 23:43:32.408165  171911 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:43:32.408199  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:43:32.473250  171911 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0903 23:43:32.473279  171911 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:43:32.473293  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0903 23:43:32.556677  171911 logs.go:123] Gathering logs for container status ...
	I0903 23:43:32.556722  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0903 23:43:35.104790  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:43:35.121004  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:43:35.121069  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:43:35.153087  171911 cri.go:89] found id: ""
	I0903 23:43:35.153118  171911 logs.go:282] 0 containers: []
	W0903 23:43:35.153126  171911 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:43:35.153133  171911 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:43:35.153187  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:43:35.185837  171911 cri.go:89] found id: ""
	I0903 23:43:35.185877  171911 logs.go:282] 0 containers: []
	W0903 23:43:35.185885  171911 logs.go:284] No container was found matching "etcd"
	I0903 23:43:35.185891  171911 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:43:35.185947  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:43:35.219367  171911 cri.go:89] found id: ""
	I0903 23:43:35.219410  171911 logs.go:282] 0 containers: []
	W0903 23:43:35.219421  171911 logs.go:284] No container was found matching "coredns"
	I0903 23:43:35.219430  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:43:35.219491  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:43:35.253170  171911 cri.go:89] found id: ""
	I0903 23:43:35.253204  171911 logs.go:282] 0 containers: []
	W0903 23:43:35.253218  171911 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:43:35.253239  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:43:35.253325  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:43:35.285565  171911 cri.go:89] found id: ""
	I0903 23:43:35.285599  171911 logs.go:282] 0 containers: []
	W0903 23:43:35.285611  171911 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:43:35.285620  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:43:35.285688  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:43:35.319446  171911 cri.go:89] found id: ""
	I0903 23:43:35.319476  171911 logs.go:282] 0 containers: []
	W0903 23:43:35.319484  171911 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:43:35.319490  171911 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:43:35.319541  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:43:35.354359  171911 cri.go:89] found id: ""
	I0903 23:43:35.354387  171911 logs.go:282] 0 containers: []
	W0903 23:43:35.354394  171911 logs.go:284] No container was found matching "kindnet"
	I0903 23:43:35.354400  171911 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0903 23:43:35.354452  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0903 23:43:35.390780  171911 cri.go:89] found id: ""
	I0903 23:43:35.390815  171911 logs.go:282] 0 containers: []
	W0903 23:43:35.390825  171911 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0903 23:43:35.390837  171911 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:43:35.390852  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0903 23:43:35.465751  171911 logs.go:123] Gathering logs for container status ...
	I0903 23:43:35.465790  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0903 23:43:35.504480  171911 logs.go:123] Gathering logs for kubelet ...
	I0903 23:43:35.504517  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0903 23:43:35.554283  171911 logs.go:123] Gathering logs for dmesg ...
	I0903 23:43:35.554318  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0903 23:43:35.567404  171911 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:43:35.567436  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:43:35.629663  171911 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0903 23:43:38.130296  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:43:38.146915  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:43:38.147003  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:43:38.179729  171911 cri.go:89] found id: ""
	I0903 23:43:38.179768  171911 logs.go:282] 0 containers: []
	W0903 23:43:38.179781  171911 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:43:38.179791  171911 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:43:38.179863  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:43:38.212185  171911 cri.go:89] found id: ""
	I0903 23:43:38.212215  171911 logs.go:282] 0 containers: []
	W0903 23:43:38.212227  171911 logs.go:284] No container was found matching "etcd"
	I0903 23:43:38.212235  171911 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:43:38.212322  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:43:38.245927  171911 cri.go:89] found id: ""
	I0903 23:43:38.245953  171911 logs.go:282] 0 containers: []
	W0903 23:43:38.245960  171911 logs.go:284] No container was found matching "coredns"
	I0903 23:43:38.245966  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:43:38.246027  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:43:38.280868  171911 cri.go:89] found id: ""
	I0903 23:43:38.280900  171911 logs.go:282] 0 containers: []
	W0903 23:43:38.280911  171911 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:43:38.280918  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:43:38.281003  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:43:38.321240  171911 cri.go:89] found id: ""
	I0903 23:43:38.321275  171911 logs.go:282] 0 containers: []
	W0903 23:43:38.321288  171911 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:43:38.321298  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:43:38.321407  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:43:38.375140  171911 cri.go:89] found id: ""
	I0903 23:43:38.375169  171911 logs.go:282] 0 containers: []
	W0903 23:43:38.375183  171911 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:43:38.375191  171911 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:43:38.375277  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:43:38.418890  171911 cri.go:89] found id: ""
	I0903 23:43:38.418928  171911 logs.go:282] 0 containers: []
	W0903 23:43:38.418940  171911 logs.go:284] No container was found matching "kindnet"
	I0903 23:43:38.418950  171911 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0903 23:43:38.419019  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0903 23:43:38.452908  171911 cri.go:89] found id: ""
	I0903 23:43:38.452938  171911 logs.go:282] 0 containers: []
	W0903 23:43:38.452949  171911 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0903 23:43:38.452962  171911 logs.go:123] Gathering logs for kubelet ...
	I0903 23:43:38.452978  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0903 23:43:38.503416  171911 logs.go:123] Gathering logs for dmesg ...
	I0903 23:43:38.503460  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0903 23:43:38.517203  171911 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:43:38.517233  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:43:38.580070  171911 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0903 23:43:38.580096  171911 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:43:38.580110  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0903 23:43:38.652380  171911 logs.go:123] Gathering logs for container status ...
	I0903 23:43:38.652420  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0903 23:43:41.192031  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:43:41.208483  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:43:41.208546  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:43:41.241854  171911 cri.go:89] found id: ""
	I0903 23:43:41.241880  171911 logs.go:282] 0 containers: []
	W0903 23:43:41.241887  171911 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:43:41.241895  171911 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:43:41.241953  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:43:41.276043  171911 cri.go:89] found id: ""
	I0903 23:43:41.276070  171911 logs.go:282] 0 containers: []
	W0903 23:43:41.276078  171911 logs.go:284] No container was found matching "etcd"
	I0903 23:43:41.276084  171911 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:43:41.276136  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:43:41.312473  171911 cri.go:89] found id: ""
	I0903 23:43:41.312503  171911 logs.go:282] 0 containers: []
	W0903 23:43:41.312514  171911 logs.go:284] No container was found matching "coredns"
	I0903 23:43:41.312522  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:43:41.312591  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:43:41.345515  171911 cri.go:89] found id: ""
	I0903 23:43:41.345543  171911 logs.go:282] 0 containers: []
	W0903 23:43:41.345551  171911 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:43:41.345558  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:43:41.345630  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:43:41.378505  171911 cri.go:89] found id: ""
	I0903 23:43:41.378539  171911 logs.go:282] 0 containers: []
	W0903 23:43:41.378547  171911 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:43:41.378554  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:43:41.378613  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:43:41.414245  171911 cri.go:89] found id: ""
	I0903 23:43:41.414276  171911 logs.go:282] 0 containers: []
	W0903 23:43:41.414284  171911 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:43:41.414290  171911 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:43:41.414351  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:43:41.450931  171911 cri.go:89] found id: ""
	I0903 23:43:41.450969  171911 logs.go:282] 0 containers: []
	W0903 23:43:41.450981  171911 logs.go:284] No container was found matching "kindnet"
	I0903 23:43:41.451050  171911 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0903 23:43:41.451126  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0903 23:43:41.484869  171911 cri.go:89] found id: ""
	I0903 23:43:41.484898  171911 logs.go:282] 0 containers: []
	W0903 23:43:41.484906  171911 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0903 23:43:41.484916  171911 logs.go:123] Gathering logs for dmesg ...
	I0903 23:43:41.484934  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0903 23:43:41.498189  171911 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:43:41.498219  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:43:41.560558  171911 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0903 23:43:41.560583  171911 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:43:41.560601  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0903 23:43:41.637195  171911 logs.go:123] Gathering logs for container status ...
	I0903 23:43:41.637235  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0903 23:43:41.675448  171911 logs.go:123] Gathering logs for kubelet ...
	I0903 23:43:41.675478  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0903 23:43:44.223401  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:43:44.253341  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:43:44.253423  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:43:44.300478  171911 cri.go:89] found id: ""
	I0903 23:43:44.300512  171911 logs.go:282] 0 containers: []
	W0903 23:43:44.300523  171911 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:43:44.300531  171911 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:43:44.300625  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:43:44.342127  171911 cri.go:89] found id: ""
	I0903 23:43:44.342158  171911 logs.go:282] 0 containers: []
	W0903 23:43:44.342166  171911 logs.go:284] No container was found matching "etcd"
	I0903 23:43:44.342178  171911 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:43:44.342242  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:43:44.392479  171911 cri.go:89] found id: ""
	I0903 23:43:44.392505  171911 logs.go:282] 0 containers: []
	W0903 23:43:44.392514  171911 logs.go:284] No container was found matching "coredns"
	I0903 23:43:44.392522  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:43:44.392587  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:43:44.428584  171911 cri.go:89] found id: ""
	I0903 23:43:44.428627  171911 logs.go:282] 0 containers: []
	W0903 23:43:44.428646  171911 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:43:44.428655  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:43:44.428724  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:43:44.463165  171911 cri.go:89] found id: ""
	I0903 23:43:44.463196  171911 logs.go:282] 0 containers: []
	W0903 23:43:44.463205  171911 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:43:44.463214  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:43:44.463276  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:43:44.497562  171911 cri.go:89] found id: ""
	I0903 23:43:44.497599  171911 logs.go:282] 0 containers: []
	W0903 23:43:44.497606  171911 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:43:44.497616  171911 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:43:44.497671  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:43:44.532319  171911 cri.go:89] found id: ""
	I0903 23:43:44.532349  171911 logs.go:282] 0 containers: []
	W0903 23:43:44.532356  171911 logs.go:284] No container was found matching "kindnet"
	I0903 23:43:44.532371  171911 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0903 23:43:44.532431  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0903 23:43:44.567181  171911 cri.go:89] found id: ""
	I0903 23:43:44.567214  171911 logs.go:282] 0 containers: []
	W0903 23:43:44.567229  171911 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0903 23:43:44.567242  171911 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:43:44.567259  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0903 23:43:44.647186  171911 logs.go:123] Gathering logs for container status ...
	I0903 23:43:44.647237  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0903 23:43:44.684779  171911 logs.go:123] Gathering logs for kubelet ...
	I0903 23:43:44.684815  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0903 23:43:44.734346  171911 logs.go:123] Gathering logs for dmesg ...
	I0903 23:43:44.734384  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0903 23:43:44.748304  171911 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:43:44.748333  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:43:44.811995  171911 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0903 23:43:47.313737  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:43:47.330976  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:43:47.331047  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:43:47.365152  171911 cri.go:89] found id: ""
	I0903 23:43:47.365183  171911 logs.go:282] 0 containers: []
	W0903 23:43:47.365191  171911 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:43:47.365198  171911 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:43:47.365250  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:43:47.402002  171911 cri.go:89] found id: ""
	I0903 23:43:47.402034  171911 logs.go:282] 0 containers: []
	W0903 23:43:47.402042  171911 logs.go:284] No container was found matching "etcd"
	I0903 23:43:47.402048  171911 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:43:47.402103  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:43:47.439574  171911 cri.go:89] found id: ""
	I0903 23:43:47.439611  171911 logs.go:282] 0 containers: []
	W0903 23:43:47.439619  171911 logs.go:284] No container was found matching "coredns"
	I0903 23:43:47.439626  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:43:47.439694  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:43:47.474877  171911 cri.go:89] found id: ""
	I0903 23:43:47.474910  171911 logs.go:282] 0 containers: []
	W0903 23:43:47.474918  171911 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:43:47.474925  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:43:47.474980  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:43:47.511850  171911 cri.go:89] found id: ""
	I0903 23:43:47.511882  171911 logs.go:282] 0 containers: []
	W0903 23:43:47.511889  171911 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:43:47.511896  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:43:47.511952  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:43:47.545975  171911 cri.go:89] found id: ""
	I0903 23:43:47.546011  171911 logs.go:282] 0 containers: []
	W0903 23:43:47.546022  171911 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:43:47.546032  171911 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:43:47.546091  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:43:47.581967  171911 cri.go:89] found id: ""
	I0903 23:43:47.581996  171911 logs.go:282] 0 containers: []
	W0903 23:43:47.582004  171911 logs.go:284] No container was found matching "kindnet"
	I0903 23:43:47.582010  171911 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0903 23:43:47.582079  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0903 23:43:47.617442  171911 cri.go:89] found id: ""
	I0903 23:43:47.617470  171911 logs.go:282] 0 containers: []
	W0903 23:43:47.617478  171911 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0903 23:43:47.617487  171911 logs.go:123] Gathering logs for container status ...
	I0903 23:43:47.617499  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0903 23:43:47.655119  171911 logs.go:123] Gathering logs for kubelet ...
	I0903 23:43:47.655150  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0903 23:43:47.702001  171911 logs.go:123] Gathering logs for dmesg ...
	I0903 23:43:47.702035  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0903 23:43:47.715671  171911 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:43:47.715701  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:43:47.781271  171911 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0903 23:43:47.781297  171911 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:43:47.781310  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0903 23:43:50.353562  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:43:50.370200  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:43:50.370271  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:43:50.404593  171911 cri.go:89] found id: ""
	I0903 23:43:50.404621  171911 logs.go:282] 0 containers: []
	W0903 23:43:50.404631  171911 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:43:50.404640  171911 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:43:50.404714  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:43:50.438454  171911 cri.go:89] found id: ""
	I0903 23:43:50.438482  171911 logs.go:282] 0 containers: []
	W0903 23:43:50.438491  171911 logs.go:284] No container was found matching "etcd"
	I0903 23:43:50.438498  171911 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:43:50.438609  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:43:50.474138  171911 cri.go:89] found id: ""
	I0903 23:43:50.474165  171911 logs.go:282] 0 containers: []
	W0903 23:43:50.474176  171911 logs.go:284] No container was found matching "coredns"
	I0903 23:43:50.474184  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:43:50.474247  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:43:50.506277  171911 cri.go:89] found id: ""
	I0903 23:43:50.506308  171911 logs.go:282] 0 containers: []
	W0903 23:43:50.506319  171911 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:43:50.506328  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:43:50.506398  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:43:50.540877  171911 cri.go:89] found id: ""
	I0903 23:43:50.540905  171911 logs.go:282] 0 containers: []
	W0903 23:43:50.540912  171911 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:43:50.540918  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:43:50.540969  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:43:50.574490  171911 cri.go:89] found id: ""
	I0903 23:43:50.574548  171911 logs.go:282] 0 containers: []
	W0903 23:43:50.574567  171911 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:43:50.574578  171911 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:43:50.574654  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:43:50.608197  171911 cri.go:89] found id: ""
	I0903 23:43:50.608225  171911 logs.go:282] 0 containers: []
	W0903 23:43:50.608233  171911 logs.go:284] No container was found matching "kindnet"
	I0903 23:43:50.608238  171911 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0903 23:43:50.608288  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0903 23:43:50.641053  171911 cri.go:89] found id: ""
	I0903 23:43:50.641082  171911 logs.go:282] 0 containers: []
	W0903 23:43:50.641089  171911 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0903 23:43:50.641099  171911 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:43:50.641109  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0903 23:43:50.712696  171911 logs.go:123] Gathering logs for container status ...
	I0903 23:43:50.712742  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0903 23:43:50.749969  171911 logs.go:123] Gathering logs for kubelet ...
	I0903 23:43:50.750001  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0903 23:43:50.800039  171911 logs.go:123] Gathering logs for dmesg ...
	I0903 23:43:50.800074  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0903 23:43:50.813705  171911 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:43:50.813736  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:43:50.876873  171911 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0903 23:43:53.378585  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:43:53.395927  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:43:53.395997  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:43:53.429784  171911 cri.go:89] found id: ""
	I0903 23:43:53.429814  171911 logs.go:282] 0 containers: []
	W0903 23:43:53.429821  171911 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:43:53.429827  171911 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:43:53.429880  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:43:53.463718  171911 cri.go:89] found id: ""
	I0903 23:43:53.463745  171911 logs.go:282] 0 containers: []
	W0903 23:43:53.463753  171911 logs.go:284] No container was found matching "etcd"
	I0903 23:43:53.463759  171911 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:43:53.463815  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:43:53.499017  171911 cri.go:89] found id: ""
	I0903 23:43:53.499046  171911 logs.go:282] 0 containers: []
	W0903 23:43:53.499056  171911 logs.go:284] No container was found matching "coredns"
	I0903 23:43:53.499065  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:43:53.499132  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:43:53.534239  171911 cri.go:89] found id: ""
	I0903 23:43:53.534273  171911 logs.go:282] 0 containers: []
	W0903 23:43:53.534283  171911 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:43:53.534290  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:43:53.534353  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:43:53.567405  171911 cri.go:89] found id: ""
	I0903 23:43:53.567431  171911 logs.go:282] 0 containers: []
	W0903 23:43:53.567438  171911 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:43:53.567445  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:43:53.567500  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:43:53.603686  171911 cri.go:89] found id: ""
	I0903 23:43:53.603722  171911 logs.go:282] 0 containers: []
	W0903 23:43:53.603733  171911 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:43:53.603742  171911 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:43:53.603805  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:43:53.638591  171911 cri.go:89] found id: ""
	I0903 23:43:53.638618  171911 logs.go:282] 0 containers: []
	W0903 23:43:53.638627  171911 logs.go:284] No container was found matching "kindnet"
	I0903 23:43:53.638635  171911 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0903 23:43:53.638698  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0903 23:43:53.672243  171911 cri.go:89] found id: ""
	I0903 23:43:53.672288  171911 logs.go:282] 0 containers: []
	W0903 23:43:53.672296  171911 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0903 23:43:53.672305  171911 logs.go:123] Gathering logs for kubelet ...
	I0903 23:43:53.672318  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0903 23:43:53.721410  171911 logs.go:123] Gathering logs for dmesg ...
	I0903 23:43:53.721448  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0903 23:43:53.735356  171911 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:43:53.735386  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:43:53.797966  171911 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0903 23:43:53.797988  171911 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:43:53.798005  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0903 23:43:53.872491  171911 logs.go:123] Gathering logs for container status ...
	I0903 23:43:53.872529  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0903 23:43:56.410853  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:43:56.427796  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:43:56.427871  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:43:56.460023  171911 cri.go:89] found id: ""
	I0903 23:43:56.460066  171911 logs.go:282] 0 containers: []
	W0903 23:43:56.460077  171911 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:43:56.460085  171911 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:43:56.460160  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:43:56.494386  171911 cri.go:89] found id: ""
	I0903 23:43:56.494414  171911 logs.go:282] 0 containers: []
	W0903 23:43:56.494424  171911 logs.go:284] No container was found matching "etcd"
	I0903 23:43:56.494432  171911 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:43:56.494492  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:43:56.529298  171911 cri.go:89] found id: ""
	I0903 23:43:56.529329  171911 logs.go:282] 0 containers: []
	W0903 23:43:56.529339  171911 logs.go:284] No container was found matching "coredns"
	I0903 23:43:56.529356  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:43:56.529433  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:43:56.562775  171911 cri.go:89] found id: ""
	I0903 23:43:56.562818  171911 logs.go:282] 0 containers: []
	W0903 23:43:56.562830  171911 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:43:56.562837  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:43:56.562898  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:43:56.604698  171911 cri.go:89] found id: ""
	I0903 23:43:56.604739  171911 logs.go:282] 0 containers: []
	W0903 23:43:56.604751  171911 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:43:56.604758  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:43:56.604811  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:43:56.644278  171911 cri.go:89] found id: ""
	I0903 23:43:56.644307  171911 logs.go:282] 0 containers: []
	W0903 23:43:56.644319  171911 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:43:56.644328  171911 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:43:56.644397  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:43:56.686334  171911 cri.go:89] found id: ""
	I0903 23:43:56.686366  171911 logs.go:282] 0 containers: []
	W0903 23:43:56.686377  171911 logs.go:284] No container was found matching "kindnet"
	I0903 23:43:56.686385  171911 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0903 23:43:56.686458  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0903 23:43:56.725441  171911 cri.go:89] found id: ""
	I0903 23:43:56.725466  171911 logs.go:282] 0 containers: []
	W0903 23:43:56.725486  171911 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0903 23:43:56.725494  171911 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:43:56.725508  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:43:56.791969  171911 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0903 23:43:56.792002  171911 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:43:56.792021  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0903 23:43:56.866297  171911 logs.go:123] Gathering logs for container status ...
	I0903 23:43:56.866338  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0903 23:43:56.904335  171911 logs.go:123] Gathering logs for kubelet ...
	I0903 23:43:56.904372  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0903 23:43:56.952822  171911 logs.go:123] Gathering logs for dmesg ...
	I0903 23:43:56.952863  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0903 23:43:59.466793  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:43:59.484556  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:43:59.484633  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:43:59.521818  171911 cri.go:89] found id: ""
	I0903 23:43:59.521848  171911 logs.go:282] 0 containers: []
	W0903 23:43:59.521860  171911 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:43:59.521868  171911 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:43:59.521945  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:43:59.556474  171911 cri.go:89] found id: ""
	I0903 23:43:59.556501  171911 logs.go:282] 0 containers: []
	W0903 23:43:59.556509  171911 logs.go:284] No container was found matching "etcd"
	I0903 23:43:59.556515  171911 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:43:59.556569  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:43:59.591410  171911 cri.go:89] found id: ""
	I0903 23:43:59.591440  171911 logs.go:282] 0 containers: []
	W0903 23:43:59.591447  171911 logs.go:284] No container was found matching "coredns"
	I0903 23:43:59.591453  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:43:59.591503  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:43:59.625559  171911 cri.go:89] found id: ""
	I0903 23:43:59.625587  171911 logs.go:282] 0 containers: []
	W0903 23:43:59.625593  171911 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:43:59.625615  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:43:59.625668  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:43:59.659603  171911 cri.go:89] found id: ""
	I0903 23:43:59.659635  171911 logs.go:282] 0 containers: []
	W0903 23:43:59.659643  171911 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:43:59.659655  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:43:59.659713  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:43:59.700514  171911 cri.go:89] found id: ""
	I0903 23:43:59.700553  171911 logs.go:282] 0 containers: []
	W0903 23:43:59.700566  171911 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:43:59.700576  171911 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:43:59.700669  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:43:59.734778  171911 cri.go:89] found id: ""
	I0903 23:43:59.734805  171911 logs.go:282] 0 containers: []
	W0903 23:43:59.734816  171911 logs.go:284] No container was found matching "kindnet"
	I0903 23:43:59.734824  171911 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0903 23:43:59.734884  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0903 23:43:59.769663  171911 cri.go:89] found id: ""
	I0903 23:43:59.769703  171911 logs.go:282] 0 containers: []
	W0903 23:43:59.769714  171911 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0903 23:43:59.769727  171911 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:43:59.769743  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:43:59.832033  171911 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0903 23:43:59.832056  171911 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:43:59.832075  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0903 23:43:59.905304  171911 logs.go:123] Gathering logs for container status ...
	I0903 23:43:59.905348  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0903 23:43:59.942790  171911 logs.go:123] Gathering logs for kubelet ...
	I0903 23:43:59.942823  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0903 23:43:59.992617  171911 logs.go:123] Gathering logs for dmesg ...
	I0903 23:43:59.992660  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0903 23:44:02.508378  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:44:02.525572  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:44:02.525652  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:44:02.561330  171911 cri.go:89] found id: ""
	I0903 23:44:02.561361  171911 logs.go:282] 0 containers: []
	W0903 23:44:02.561369  171911 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:44:02.561375  171911 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:44:02.561461  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:44:02.595933  171911 cri.go:89] found id: ""
	I0903 23:44:02.595962  171911 logs.go:282] 0 containers: []
	W0903 23:44:02.595970  171911 logs.go:284] No container was found matching "etcd"
	I0903 23:44:02.595975  171911 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:44:02.596041  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:44:02.628817  171911 cri.go:89] found id: ""
	I0903 23:44:02.628854  171911 logs.go:282] 0 containers: []
	W0903 23:44:02.628865  171911 logs.go:284] No container was found matching "coredns"
	I0903 23:44:02.628873  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:44:02.628944  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:44:02.665027  171911 cri.go:89] found id: ""
	I0903 23:44:02.665060  171911 logs.go:282] 0 containers: []
	W0903 23:44:02.665072  171911 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:44:02.665079  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:44:02.665143  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:44:02.698721  171911 cri.go:89] found id: ""
	I0903 23:44:02.698752  171911 logs.go:282] 0 containers: []
	W0903 23:44:02.698761  171911 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:44:02.698768  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:44:02.698822  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:44:02.736138  171911 cri.go:89] found id: ""
	I0903 23:44:02.736170  171911 logs.go:282] 0 containers: []
	W0903 23:44:02.736180  171911 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:44:02.736188  171911 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:44:02.736254  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:44:02.770089  171911 cri.go:89] found id: ""
	I0903 23:44:02.770120  171911 logs.go:282] 0 containers: []
	W0903 23:44:02.770127  171911 logs.go:284] No container was found matching "kindnet"
	I0903 23:44:02.770134  171911 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0903 23:44:02.770201  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0903 23:44:02.805595  171911 cri.go:89] found id: ""
	I0903 23:44:02.805627  171911 logs.go:282] 0 containers: []
	W0903 23:44:02.805638  171911 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0903 23:44:02.805650  171911 logs.go:123] Gathering logs for kubelet ...
	I0903 23:44:02.805666  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0903 23:44:02.855714  171911 logs.go:123] Gathering logs for dmesg ...
	I0903 23:44:02.855753  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0903 23:44:02.870817  171911 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:44:02.870854  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:44:02.935987  171911 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0903 23:44:02.936011  171911 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:44:02.936025  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0903 23:44:03.013471  171911 logs.go:123] Gathering logs for container status ...
	I0903 23:44:03.013513  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0903 23:44:05.553522  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:44:05.570805  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:44:05.570869  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:44:05.606023  171911 cri.go:89] found id: ""
	I0903 23:44:05.606061  171911 logs.go:282] 0 containers: []
	W0903 23:44:05.606075  171911 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:44:05.606084  171911 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:44:05.606151  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:44:05.640331  171911 cri.go:89] found id: ""
	I0903 23:44:05.640362  171911 logs.go:282] 0 containers: []
	W0903 23:44:05.640374  171911 logs.go:284] No container was found matching "etcd"
	I0903 23:44:05.640380  171911 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:44:05.640455  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:44:05.675579  171911 cri.go:89] found id: ""
	I0903 23:44:05.675613  171911 logs.go:282] 0 containers: []
	W0903 23:44:05.675626  171911 logs.go:284] No container was found matching "coredns"
	I0903 23:44:05.675634  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:44:05.675698  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:44:05.710190  171911 cri.go:89] found id: ""
	I0903 23:44:05.710219  171911 logs.go:282] 0 containers: []
	W0903 23:44:05.710226  171911 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:44:05.710233  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:44:05.710292  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:44:05.745803  171911 cri.go:89] found id: ""
	I0903 23:44:05.745834  171911 logs.go:282] 0 containers: []
	W0903 23:44:05.745843  171911 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:44:05.745850  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:44:05.745908  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:44:05.780095  171911 cri.go:89] found id: ""
	I0903 23:44:05.780126  171911 logs.go:282] 0 containers: []
	W0903 23:44:05.780134  171911 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:44:05.780141  171911 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:44:05.780193  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:44:05.812816  171911 cri.go:89] found id: ""
	I0903 23:44:05.812849  171911 logs.go:282] 0 containers: []
	W0903 23:44:05.812862  171911 logs.go:284] No container was found matching "kindnet"
	I0903 23:44:05.812870  171911 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0903 23:44:05.812944  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0903 23:44:05.845992  171911 cri.go:89] found id: ""
	I0903 23:44:05.846024  171911 logs.go:282] 0 containers: []
	W0903 23:44:05.846032  171911 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0903 23:44:05.846041  171911 logs.go:123] Gathering logs for kubelet ...
	I0903 23:44:05.846053  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0903 23:44:05.896122  171911 logs.go:123] Gathering logs for dmesg ...
	I0903 23:44:05.896163  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0903 23:44:05.910777  171911 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:44:05.910815  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:44:05.973743  171911 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0903 23:44:05.973771  171911 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:44:05.973784  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0903 23:44:06.047880  171911 logs.go:123] Gathering logs for container status ...
	I0903 23:44:06.047924  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0903 23:44:08.588751  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:44:08.605926  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:44:08.605989  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:44:08.639229  171911 cri.go:89] found id: ""
	I0903 23:44:08.639260  171911 logs.go:282] 0 containers: []
	W0903 23:44:08.639268  171911 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:44:08.639275  171911 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:44:08.639332  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:44:08.673218  171911 cri.go:89] found id: ""
	I0903 23:44:08.673263  171911 logs.go:282] 0 containers: []
	W0903 23:44:08.673274  171911 logs.go:284] No container was found matching "etcd"
	I0903 23:44:08.673283  171911 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:44:08.673353  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:44:08.708635  171911 cri.go:89] found id: ""
	I0903 23:44:08.708665  171911 logs.go:282] 0 containers: []
	W0903 23:44:08.708676  171911 logs.go:284] No container was found matching "coredns"
	I0903 23:44:08.708685  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:44:08.708755  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:44:08.744277  171911 cri.go:89] found id: ""
	I0903 23:44:08.744304  171911 logs.go:282] 0 containers: []
	W0903 23:44:08.744311  171911 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:44:08.744318  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:44:08.744385  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:44:08.778421  171911 cri.go:89] found id: ""
	I0903 23:44:08.778451  171911 logs.go:282] 0 containers: []
	W0903 23:44:08.778469  171911 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:44:08.778477  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:44:08.778541  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:44:08.815240  171911 cri.go:89] found id: ""
	I0903 23:44:08.815277  171911 logs.go:282] 0 containers: []
	W0903 23:44:08.815290  171911 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:44:08.815298  171911 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:44:08.815371  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:44:08.849900  171911 cri.go:89] found id: ""
	I0903 23:44:08.849929  171911 logs.go:282] 0 containers: []
	W0903 23:44:08.849936  171911 logs.go:284] No container was found matching "kindnet"
	I0903 23:44:08.849942  171911 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0903 23:44:08.849993  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0903 23:44:08.885596  171911 cri.go:89] found id: ""
	I0903 23:44:08.885631  171911 logs.go:282] 0 containers: []
	W0903 23:44:08.885641  171911 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0903 23:44:08.885651  171911 logs.go:123] Gathering logs for container status ...
	I0903 23:44:08.885668  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0903 23:44:08.924882  171911 logs.go:123] Gathering logs for kubelet ...
	I0903 23:44:08.924909  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0903 23:44:08.976269  171911 logs.go:123] Gathering logs for dmesg ...
	I0903 23:44:08.976304  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0903 23:44:08.993447  171911 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:44:08.993483  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:44:09.069817  171911 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0903 23:44:09.069845  171911 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:44:09.069862  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0903 23:44:11.651779  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:44:11.668352  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:44:11.668423  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:44:11.703206  171911 cri.go:89] found id: ""
	I0903 23:44:11.703243  171911 logs.go:282] 0 containers: []
	W0903 23:44:11.703255  171911 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:44:11.703264  171911 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:44:11.703357  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:44:11.737323  171911 cri.go:89] found id: ""
	I0903 23:44:11.737367  171911 logs.go:282] 0 containers: []
	W0903 23:44:11.737380  171911 logs.go:284] No container was found matching "etcd"
	I0903 23:44:11.737402  171911 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:44:11.737479  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:44:11.771970  171911 cri.go:89] found id: ""
	I0903 23:44:11.772010  171911 logs.go:282] 0 containers: []
	W0903 23:44:11.772021  171911 logs.go:284] No container was found matching "coredns"
	I0903 23:44:11.772030  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:44:11.772104  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:44:11.806342  171911 cri.go:89] found id: ""
	I0903 23:44:11.806386  171911 logs.go:282] 0 containers: []
	W0903 23:44:11.806397  171911 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:44:11.806406  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:44:11.806483  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:44:11.843136  171911 cri.go:89] found id: ""
	I0903 23:44:11.843170  171911 logs.go:282] 0 containers: []
	W0903 23:44:11.843181  171911 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:44:11.843189  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:44:11.843259  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:44:11.877246  171911 cri.go:89] found id: ""
	I0903 23:44:11.877285  171911 logs.go:282] 0 containers: []
	W0903 23:44:11.877296  171911 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:44:11.877306  171911 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:44:11.877379  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:44:11.915257  171911 cri.go:89] found id: ""
	I0903 23:44:11.915295  171911 logs.go:282] 0 containers: []
	W0903 23:44:11.915308  171911 logs.go:284] No container was found matching "kindnet"
	I0903 23:44:11.915317  171911 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0903 23:44:11.915396  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0903 23:44:11.949271  171911 cri.go:89] found id: ""
	I0903 23:44:11.949300  171911 logs.go:282] 0 containers: []
	W0903 23:44:11.949310  171911 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0903 23:44:11.949323  171911 logs.go:123] Gathering logs for dmesg ...
	I0903 23:44:11.949342  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0903 23:44:11.962921  171911 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:44:11.962954  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:44:12.025549  171911 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0903 23:44:12.025580  171911 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:44:12.025596  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0903 23:44:12.099077  171911 logs.go:123] Gathering logs for container status ...
	I0903 23:44:12.099120  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0903 23:44:12.136408  171911 logs.go:123] Gathering logs for kubelet ...
	I0903 23:44:12.136446  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0903 23:44:14.686632  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:44:14.704032  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:44:14.704101  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:44:14.739046  171911 cri.go:89] found id: ""
	I0903 23:44:14.739076  171911 logs.go:282] 0 containers: []
	W0903 23:44:14.739084  171911 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:44:14.739091  171911 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:44:14.739156  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:44:14.775028  171911 cri.go:89] found id: ""
	I0903 23:44:14.775066  171911 logs.go:282] 0 containers: []
	W0903 23:44:14.775078  171911 logs.go:284] No container was found matching "etcd"
	I0903 23:44:14.775087  171911 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:44:14.775150  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:44:14.808896  171911 cri.go:89] found id: ""
	I0903 23:44:14.808928  171911 logs.go:282] 0 containers: []
	W0903 23:44:14.808939  171911 logs.go:284] No container was found matching "coredns"
	I0903 23:44:14.808947  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:44:14.809014  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:44:14.844967  171911 cri.go:89] found id: ""
	I0903 23:44:14.844998  171911 logs.go:282] 0 containers: []
	W0903 23:44:14.845010  171911 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:44:14.845018  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:44:14.845087  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:44:14.878706  171911 cri.go:89] found id: ""
	I0903 23:44:14.878734  171911 logs.go:282] 0 containers: []
	W0903 23:44:14.878742  171911 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:44:14.878750  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:44:14.878824  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:44:14.914368  171911 cri.go:89] found id: ""
	I0903 23:44:14.914407  171911 logs.go:282] 0 containers: []
	W0903 23:44:14.914420  171911 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:44:14.914429  171911 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:44:14.914523  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:44:14.949846  171911 cri.go:89] found id: ""
	I0903 23:44:14.949873  171911 logs.go:282] 0 containers: []
	W0903 23:44:14.949881  171911 logs.go:284] No container was found matching "kindnet"
	I0903 23:44:14.949888  171911 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0903 23:44:14.949956  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0903 23:44:14.985479  171911 cri.go:89] found id: ""
	I0903 23:44:14.985511  171911 logs.go:282] 0 containers: []
	W0903 23:44:14.985522  171911 logs.go:284] No container was found matching "kubernetes-dashboard"
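	The block above is one complete enumeration pass: for each control-plane component the runner asks the CRI runtime for matching containers, gets back an empty ID list, and logs the corresponding "No container was found" warning. The same sweep as a hedged shell sketch, runnable inside the node and assuming crictl is on PATH:

	    # One enumeration pass over the components the log checks
	    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	             kube-controller-manager kindnet kubernetes-dashboard; do
	      ids=$(sudo crictl ps -a --quiet --name="$c")
	      [ -z "$ids" ] && echo "no container matching \"$c\""
	    done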
	I0903 23:44:14.985534  171911 logs.go:123] Gathering logs for kubelet ...
	I0903 23:44:14.985550  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0903 23:44:15.036097  171911 logs.go:123] Gathering logs for dmesg ...
	I0903 23:44:15.036141  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0903 23:44:15.050336  171911 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:44:15.050365  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:44:15.116416  171911 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0903 23:44:15.116439  171911 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:44:15.116457  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0903 23:44:15.193453  171911 logs.go:123] Gathering logs for container status ...
	I0903 23:44:15.193498  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
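	Between enumeration passes the runner cycles through the same log sources: the kubelet and CRI-O units via journalctl, the kernel ring buffer via dmesg, the failing describe-nodes call, and container status via crictl with a docker fallback. In the container-status command, `which crictl || echo crictl` degrades to the bare command name when `which` finds nothing, so the `|| sudo docker ps -a` branch can still fire. The same commands, copied from the log and runnable by hand inside the node:

	    sudo journalctl -u kubelet -n 400   # last 400 lines of the kubelet unit
	    sudo journalctl -u crio -n 400      # last 400 lines of the CRI-O unit
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a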
	I0903 23:44:17.731284  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:44:17.748791  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:44:17.748854  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:44:17.784857  171911 cri.go:89] found id: ""
	I0903 23:44:17.784884  171911 logs.go:282] 0 containers: []
	W0903 23:44:17.784892  171911 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:44:17.784897  171911 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:44:17.784953  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:44:17.819838  171911 cri.go:89] found id: ""
	I0903 23:44:17.819867  171911 logs.go:282] 0 containers: []
	W0903 23:44:17.819875  171911 logs.go:284] No container was found matching "etcd"
	I0903 23:44:17.819881  171911 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:44:17.819932  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:44:17.853453  171911 cri.go:89] found id: ""
	I0903 23:44:17.853482  171911 logs.go:282] 0 containers: []
	W0903 23:44:17.853489  171911 logs.go:284] No container was found matching "coredns"
	I0903 23:44:17.853496  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:44:17.853553  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:44:17.887886  171911 cri.go:89] found id: ""
	I0903 23:44:17.887915  171911 logs.go:282] 0 containers: []
	W0903 23:44:17.887923  171911 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:44:17.887930  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:44:17.887985  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:44:17.923140  171911 cri.go:89] found id: ""
	I0903 23:44:17.923172  171911 logs.go:282] 0 containers: []
	W0903 23:44:17.923183  171911 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:44:17.923190  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:44:17.923258  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:44:17.957595  171911 cri.go:89] found id: ""
	I0903 23:44:17.957625  171911 logs.go:282] 0 containers: []
	W0903 23:44:17.957638  171911 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:44:17.957647  171911 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:44:17.957717  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:44:17.990247  171911 cri.go:89] found id: ""
	I0903 23:44:17.990276  171911 logs.go:282] 0 containers: []
	W0903 23:44:17.990284  171911 logs.go:284] No container was found matching "kindnet"
	I0903 23:44:17.990290  171911 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0903 23:44:17.990362  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0903 23:44:18.024643  171911 cri.go:89] found id: ""
	I0903 23:44:18.024673  171911 logs.go:282] 0 containers: []
	W0903 23:44:18.024685  171911 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0903 23:44:18.024697  171911 logs.go:123] Gathering logs for kubelet ...
	I0903 23:44:18.024713  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0903 23:44:18.076397  171911 logs.go:123] Gathering logs for dmesg ...
	I0903 23:44:18.076436  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0903 23:44:18.090204  171911 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:44:18.090233  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:44:18.163020  171911 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0903 23:44:18.163044  171911 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:44:18.163059  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0903 23:44:18.240276  171911 logs.go:123] Gathering logs for container status ...
	I0903 23:44:18.240314  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0903 23:44:20.781710  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:44:20.798871  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:44:20.798939  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:44:20.833834  171911 cri.go:89] found id: ""
	I0903 23:44:20.833867  171911 logs.go:282] 0 containers: []
	W0903 23:44:20.833875  171911 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:44:20.833881  171911 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:44:20.833936  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:44:20.868536  171911 cri.go:89] found id: ""
	I0903 23:44:20.868569  171911 logs.go:282] 0 containers: []
	W0903 23:44:20.868577  171911 logs.go:284] No container was found matching "etcd"
	I0903 23:44:20.868583  171911 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:44:20.868639  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:44:20.902513  171911 cri.go:89] found id: ""
	I0903 23:44:20.902546  171911 logs.go:282] 0 containers: []
	W0903 23:44:20.902557  171911 logs.go:284] No container was found matching "coredns"
	I0903 23:44:20.902570  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:44:20.902644  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:44:20.935967  171911 cri.go:89] found id: ""
	I0903 23:44:20.935994  171911 logs.go:282] 0 containers: []
	W0903 23:44:20.936001  171911 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:44:20.936007  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:44:20.936070  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:44:20.969967  171911 cri.go:89] found id: ""
	I0903 23:44:20.969995  171911 logs.go:282] 0 containers: []
	W0903 23:44:20.970003  171911 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:44:20.970009  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:44:20.970067  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:44:21.005097  171911 cri.go:89] found id: ""
	I0903 23:44:21.005130  171911 logs.go:282] 0 containers: []
	W0903 23:44:21.005149  171911 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:44:21.005158  171911 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:44:21.005231  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:44:21.040315  171911 cri.go:89] found id: ""
	I0903 23:44:21.040350  171911 logs.go:282] 0 containers: []
	W0903 23:44:21.040357  171911 logs.go:284] No container was found matching "kindnet"
	I0903 23:44:21.040364  171911 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0903 23:44:21.040431  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0903 23:44:21.075411  171911 cri.go:89] found id: ""
	I0903 23:44:21.075447  171911 logs.go:282] 0 containers: []
	W0903 23:44:21.075456  171911 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0903 23:44:21.075466  171911 logs.go:123] Gathering logs for kubelet ...
	I0903 23:44:21.075478  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0903 23:44:21.125281  171911 logs.go:123] Gathering logs for dmesg ...
	I0903 23:44:21.125322  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0903 23:44:21.139605  171911 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:44:21.139635  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:44:21.203960  171911 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0903 23:44:21.203986  171911 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:44:21.204004  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0903 23:44:21.278167  171911 logs.go:123] Gathering logs for container status ...
	I0903 23:44:21.278211  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0903 23:44:23.820132  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:44:23.839119  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:44:23.839184  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:44:23.883827  171911 cri.go:89] found id: ""
	I0903 23:44:23.883864  171911 logs.go:282] 0 containers: []
	W0903 23:44:23.883876  171911 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:44:23.883884  171911 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:44:23.883943  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:44:23.929729  171911 cri.go:89] found id: ""
	I0903 23:44:23.929756  171911 logs.go:282] 0 containers: []
	W0903 23:44:23.929765  171911 logs.go:284] No container was found matching "etcd"
	I0903 23:44:23.929771  171911 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:44:23.929822  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:44:23.962676  171911 cri.go:89] found id: ""
	I0903 23:44:23.962708  171911 logs.go:282] 0 containers: []
	W0903 23:44:23.962716  171911 logs.go:284] No container was found matching "coredns"
	I0903 23:44:23.962722  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:44:23.962778  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:44:23.995464  171911 cri.go:89] found id: ""
	I0903 23:44:23.995505  171911 logs.go:282] 0 containers: []
	W0903 23:44:23.995516  171911 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:44:23.995522  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:44:23.995586  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:44:24.030690  171911 cri.go:89] found id: ""
	I0903 23:44:24.030718  171911 logs.go:282] 0 containers: []
	W0903 23:44:24.030726  171911 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:44:24.030733  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:44:24.030791  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:44:24.064311  171911 cri.go:89] found id: ""
	I0903 23:44:24.064338  171911 logs.go:282] 0 containers: []
	W0903 23:44:24.064346  171911 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:44:24.064352  171911 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:44:24.064408  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:44:24.098888  171911 cri.go:89] found id: ""
	I0903 23:44:24.098917  171911 logs.go:282] 0 containers: []
	W0903 23:44:24.098924  171911 logs.go:284] No container was found matching "kindnet"
	I0903 23:44:24.098930  171911 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0903 23:44:24.098990  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0903 23:44:24.135030  171911 cri.go:89] found id: ""
	I0903 23:44:24.135057  171911 logs.go:282] 0 containers: []
	W0903 23:44:24.135064  171911 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0903 23:44:24.135074  171911 logs.go:123] Gathering logs for kubelet ...
	I0903 23:44:24.135086  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0903 23:44:24.185228  171911 logs.go:123] Gathering logs for dmesg ...
	I0903 23:44:24.185266  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0903 23:44:24.198908  171911 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:44:24.198937  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:44:24.260291  171911 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0903 23:44:24.260337  171911 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:44:24.260355  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0903 23:44:24.337581  171911 logs.go:123] Gathering logs for container status ...
	I0903 23:44:24.337620  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0903 23:44:26.876959  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:44:26.893615  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:44:26.893679  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:44:26.926745  171911 cri.go:89] found id: ""
	I0903 23:44:26.926776  171911 logs.go:282] 0 containers: []
	W0903 23:44:26.926784  171911 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:44:26.926791  171911 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:44:26.926848  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:44:26.959697  171911 cri.go:89] found id: ""
	I0903 23:44:26.959727  171911 logs.go:282] 0 containers: []
	W0903 23:44:26.959735  171911 logs.go:284] No container was found matching "etcd"
	I0903 23:44:26.959742  171911 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:44:26.959795  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:44:26.991963  171911 cri.go:89] found id: ""
	I0903 23:44:26.991996  171911 logs.go:282] 0 containers: []
	W0903 23:44:26.992004  171911 logs.go:284] No container was found matching "coredns"
	I0903 23:44:26.992011  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:44:26.992064  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:44:27.025939  171911 cri.go:89] found id: ""
	I0903 23:44:27.025978  171911 logs.go:282] 0 containers: []
	W0903 23:44:27.025989  171911 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:44:27.025997  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:44:27.026065  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:44:27.058572  171911 cri.go:89] found id: ""
	I0903 23:44:27.058598  171911 logs.go:282] 0 containers: []
	W0903 23:44:27.058606  171911 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:44:27.058612  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:44:27.058666  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:44:27.092277  171911 cri.go:89] found id: ""
	I0903 23:44:27.092309  171911 logs.go:282] 0 containers: []
	W0903 23:44:27.092318  171911 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:44:27.092324  171911 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:44:27.092385  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:44:27.127742  171911 cri.go:89] found id: ""
	I0903 23:44:27.127777  171911 logs.go:282] 0 containers: []
	W0903 23:44:27.127789  171911 logs.go:284] No container was found matching "kindnet"
	I0903 23:44:27.127798  171911 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0903 23:44:27.127872  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0903 23:44:27.162425  171911 cri.go:89] found id: ""
	I0903 23:44:27.162463  171911 logs.go:282] 0 containers: []
	W0903 23:44:27.162474  171911 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0903 23:44:27.162487  171911 logs.go:123] Gathering logs for kubelet ...
	I0903 23:44:27.162503  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0903 23:44:27.213126  171911 logs.go:123] Gathering logs for dmesg ...
	I0903 23:44:27.213165  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0903 23:44:27.226983  171911 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:44:27.227013  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:44:27.293122  171911 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0903 23:44:27.293152  171911 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:44:27.293169  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0903 23:44:27.368497  171911 logs.go:123] Gathering logs for container status ...
	I0903 23:44:27.368538  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0903 23:44:29.907183  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:44:29.924079  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:44:29.924172  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:44:29.957813  171911 cri.go:89] found id: ""
	I0903 23:44:29.957843  171911 logs.go:282] 0 containers: []
	W0903 23:44:29.957851  171911 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:44:29.957857  171911 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:44:29.957919  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:44:29.992782  171911 cri.go:89] found id: ""
	I0903 23:44:29.992812  171911 logs.go:282] 0 containers: []
	W0903 23:44:29.992819  171911 logs.go:284] No container was found matching "etcd"
	I0903 23:44:29.992826  171911 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:44:29.992888  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:44:30.026629  171911 cri.go:89] found id: ""
	I0903 23:44:30.026664  171911 logs.go:282] 0 containers: []
	W0903 23:44:30.026674  171911 logs.go:284] No container was found matching "coredns"
	I0903 23:44:30.026682  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:44:30.026756  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:44:30.060035  171911 cri.go:89] found id: ""
	I0903 23:44:30.060074  171911 logs.go:282] 0 containers: []
	W0903 23:44:30.060083  171911 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:44:30.060092  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:44:30.060154  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:44:30.101281  171911 cri.go:89] found id: ""
	I0903 23:44:30.101319  171911 logs.go:282] 0 containers: []
	W0903 23:44:30.101330  171911 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:44:30.101338  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:44:30.101419  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:44:30.146884  171911 cri.go:89] found id: ""
	I0903 23:44:30.146911  171911 logs.go:282] 0 containers: []
	W0903 23:44:30.146918  171911 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:44:30.146925  171911 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:44:30.146989  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:44:30.180988  171911 cri.go:89] found id: ""
	I0903 23:44:30.181016  171911 logs.go:282] 0 containers: []
	W0903 23:44:30.181024  171911 logs.go:284] No container was found matching "kindnet"
	I0903 23:44:30.181030  171911 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0903 23:44:30.181103  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0903 23:44:30.214648  171911 cri.go:89] found id: ""
	I0903 23:44:30.214679  171911 logs.go:282] 0 containers: []
	W0903 23:44:30.214687  171911 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0903 23:44:30.214696  171911 logs.go:123] Gathering logs for kubelet ...
	I0903 23:44:30.214709  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0903 23:44:30.262757  171911 logs.go:123] Gathering logs for dmesg ...
	I0903 23:44:30.262799  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0903 23:44:30.283299  171911 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:44:30.283331  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:44:30.366919  171911 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0903 23:44:30.366945  171911 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:44:30.366959  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0903 23:44:30.442612  171911 logs.go:123] Gathering logs for container status ...
	I0903 23:44:30.442654  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0903 23:44:32.981733  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:44:32.999850  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:44:32.999930  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:44:33.040618  171911 cri.go:89] found id: ""
	I0903 23:44:33.040653  171911 logs.go:282] 0 containers: []
	W0903 23:44:33.040664  171911 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:44:33.040671  171911 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:44:33.040738  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:44:33.081786  171911 cri.go:89] found id: ""
	I0903 23:44:33.081818  171911 logs.go:282] 0 containers: []
	W0903 23:44:33.081829  171911 logs.go:284] No container was found matching "etcd"
	I0903 23:44:33.081836  171911 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:44:33.081906  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:44:33.125847  171911 cri.go:89] found id: ""
	I0903 23:44:33.125878  171911 logs.go:282] 0 containers: []
	W0903 23:44:33.125888  171911 logs.go:284] No container was found matching "coredns"
	I0903 23:44:33.125896  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:44:33.125962  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:44:33.167437  171911 cri.go:89] found id: ""
	I0903 23:44:33.167465  171911 logs.go:282] 0 containers: []
	W0903 23:44:33.167473  171911 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:44:33.167481  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:44:33.167557  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:44:33.208145  171911 cri.go:89] found id: ""
	I0903 23:44:33.208177  171911 logs.go:282] 0 containers: []
	W0903 23:44:33.208185  171911 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:44:33.208192  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:44:33.208248  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:44:33.250045  171911 cri.go:89] found id: ""
	I0903 23:44:33.250074  171911 logs.go:282] 0 containers: []
	W0903 23:44:33.250081  171911 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:44:33.250087  171911 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:44:33.250139  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:44:33.289576  171911 cri.go:89] found id: ""
	I0903 23:44:33.289607  171911 logs.go:282] 0 containers: []
	W0903 23:44:33.289615  171911 logs.go:284] No container was found matching "kindnet"
	I0903 23:44:33.289621  171911 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0903 23:44:33.289676  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0903 23:44:33.325452  171911 cri.go:89] found id: ""
	I0903 23:44:33.325485  171911 logs.go:282] 0 containers: []
	W0903 23:44:33.325493  171911 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0903 23:44:33.325503  171911 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:44:33.325515  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0903 23:44:33.403967  171911 logs.go:123] Gathering logs for container status ...
	I0903 23:44:33.404018  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0903 23:44:33.441581  171911 logs.go:123] Gathering logs for kubelet ...
	I0903 23:44:33.441619  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0903 23:44:33.488744  171911 logs.go:123] Gathering logs for dmesg ...
	I0903 23:44:33.488794  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0903 23:44:33.502603  171911 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:44:33.502648  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:44:33.567447  171911 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
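	The timestamps show the whole gather-and-check sequence repeating on a roughly 2.5-second cadence, each round opened by `pgrep -xnf kube-apiserver.*minikube.*`, which matches only a full command line containing both patterns. A minimal standalone sketch of that wait loop; the fixed 2.5 s sleep is an assumption mirroring the observed cadence, not the tool's actual retry policy:

	    # Hypothetical wait loop equivalent to the polling seen above
	    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
	      sleep 2.5
	    done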
	I0903 23:44:36.069781  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:44:36.093945  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:44:36.094023  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:44:36.138900  171911 cri.go:89] found id: ""
	I0903 23:44:36.138929  171911 logs.go:282] 0 containers: []
	W0903 23:44:36.138940  171911 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:44:36.138950  171911 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:44:36.139016  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:44:36.174814  171911 cri.go:89] found id: ""
	I0903 23:44:36.174841  171911 logs.go:282] 0 containers: []
	W0903 23:44:36.174849  171911 logs.go:284] No container was found matching "etcd"
	I0903 23:44:36.174855  171911 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:44:36.174918  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:44:36.211574  171911 cri.go:89] found id: ""
	I0903 23:44:36.211604  171911 logs.go:282] 0 containers: []
	W0903 23:44:36.211611  171911 logs.go:284] No container was found matching "coredns"
	I0903 23:44:36.211618  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:44:36.211670  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:44:36.245780  171911 cri.go:89] found id: ""
	I0903 23:44:36.245812  171911 logs.go:282] 0 containers: []
	W0903 23:44:36.245823  171911 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:44:36.245830  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:44:36.245886  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:44:36.280576  171911 cri.go:89] found id: ""
	I0903 23:44:36.280606  171911 logs.go:282] 0 containers: []
	W0903 23:44:36.280614  171911 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:44:36.280620  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:44:36.280674  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:44:36.315469  171911 cri.go:89] found id: ""
	I0903 23:44:36.315504  171911 logs.go:282] 0 containers: []
	W0903 23:44:36.315515  171911 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:44:36.315524  171911 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:44:36.315582  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:44:36.349983  171911 cri.go:89] found id: ""
	I0903 23:44:36.350018  171911 logs.go:282] 0 containers: []
	W0903 23:44:36.350027  171911 logs.go:284] No container was found matching "kindnet"
	I0903 23:44:36.350033  171911 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0903 23:44:36.350083  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0903 23:44:36.384827  171911 cri.go:89] found id: ""
	I0903 23:44:36.384857  171911 logs.go:282] 0 containers: []
	W0903 23:44:36.384866  171911 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0903 23:44:36.384877  171911 logs.go:123] Gathering logs for dmesg ...
	I0903 23:44:36.384896  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0903 23:44:36.398999  171911 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:44:36.399029  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:44:36.467458  171911 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0903 23:44:36.467492  171911 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:44:36.467507  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0903 23:44:36.546881  171911 logs.go:123] Gathering logs for container status ...
	I0903 23:44:36.546922  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0903 23:44:36.584400  171911 logs.go:123] Gathering logs for kubelet ...
	I0903 23:44:36.584437  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0903 23:44:39.135283  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:44:39.152700  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:44:39.152762  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:44:39.187286  171911 cri.go:89] found id: ""
	I0903 23:44:39.187333  171911 logs.go:282] 0 containers: []
	W0903 23:44:39.187344  171911 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:44:39.187351  171911 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:44:39.187418  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:44:39.222904  171911 cri.go:89] found id: ""
	I0903 23:44:39.222932  171911 logs.go:282] 0 containers: []
	W0903 23:44:39.222940  171911 logs.go:284] No container was found matching "etcd"
	I0903 23:44:39.222946  171911 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:44:39.223001  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:44:39.256820  171911 cri.go:89] found id: ""
	I0903 23:44:39.256849  171911 logs.go:282] 0 containers: []
	W0903 23:44:39.256860  171911 logs.go:284] No container was found matching "coredns"
	I0903 23:44:39.256867  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:44:39.256936  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:44:39.290701  171911 cri.go:89] found id: ""
	I0903 23:44:39.290732  171911 logs.go:282] 0 containers: []
	W0903 23:44:39.290742  171911 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:44:39.290748  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:44:39.290814  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:44:39.325458  171911 cri.go:89] found id: ""
	I0903 23:44:39.325494  171911 logs.go:282] 0 containers: []
	W0903 23:44:39.325505  171911 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:44:39.325513  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:44:39.325577  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:44:39.358959  171911 cri.go:89] found id: ""
	I0903 23:44:39.358988  171911 logs.go:282] 0 containers: []
	W0903 23:44:39.358996  171911 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:44:39.359002  171911 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:44:39.359070  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:44:39.394031  171911 cri.go:89] found id: ""
	I0903 23:44:39.394058  171911 logs.go:282] 0 containers: []
	W0903 23:44:39.394066  171911 logs.go:284] No container was found matching "kindnet"
	I0903 23:44:39.394072  171911 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0903 23:44:39.394135  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0903 23:44:39.428921  171911 cri.go:89] found id: ""
	I0903 23:44:39.428950  171911 logs.go:282] 0 containers: []
	W0903 23:44:39.428961  171911 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0903 23:44:39.428973  171911 logs.go:123] Gathering logs for kubelet ...
	I0903 23:44:39.428992  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0903 23:44:39.478303  171911 logs.go:123] Gathering logs for dmesg ...
	I0903 23:44:39.478346  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0903 23:44:39.492136  171911 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:44:39.492165  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:44:39.556474  171911 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0903 23:44:39.556499  171911 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:44:39.556512  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0903 23:44:39.630384  171911 logs.go:123] Gathering logs for container status ...
	I0903 23:44:39.630421  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0903 23:44:42.169783  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:44:42.186331  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:44:42.186392  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:44:42.220630  171911 cri.go:89] found id: ""
	I0903 23:44:42.220658  171911 logs.go:282] 0 containers: []
	W0903 23:44:42.220669  171911 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:44:42.220678  171911 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:44:42.220751  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:44:42.256274  171911 cri.go:89] found id: ""
	I0903 23:44:42.256310  171911 logs.go:282] 0 containers: []
	W0903 23:44:42.256321  171911 logs.go:284] No container was found matching "etcd"
	I0903 23:44:42.256329  171911 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:44:42.256387  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:44:42.289958  171911 cri.go:89] found id: ""
	I0903 23:44:42.289988  171911 logs.go:282] 0 containers: []
	W0903 23:44:42.289998  171911 logs.go:284] No container was found matching "coredns"
	I0903 23:44:42.290006  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:44:42.290065  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:44:42.322425  171911 cri.go:89] found id: ""
	I0903 23:44:42.322453  171911 logs.go:282] 0 containers: []
	W0903 23:44:42.322464  171911 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:44:42.322473  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:44:42.322537  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:44:42.357459  171911 cri.go:89] found id: ""
	I0903 23:44:42.357494  171911 logs.go:282] 0 containers: []
	W0903 23:44:42.357503  171911 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:44:42.357509  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:44:42.357588  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:44:42.390807  171911 cri.go:89] found id: ""
	I0903 23:44:42.390837  171911 logs.go:282] 0 containers: []
	W0903 23:44:42.390845  171911 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:44:42.390851  171911 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:44:42.390924  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:44:42.424548  171911 cri.go:89] found id: ""
	I0903 23:44:42.424579  171911 logs.go:282] 0 containers: []
	W0903 23:44:42.424590  171911 logs.go:284] No container was found matching "kindnet"
	I0903 23:44:42.424598  171911 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0903 23:44:42.424667  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0903 23:44:42.459215  171911 cri.go:89] found id: ""
	I0903 23:44:42.459250  171911 logs.go:282] 0 containers: []
	W0903 23:44:42.459261  171911 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0903 23:44:42.459274  171911 logs.go:123] Gathering logs for kubelet ...
	I0903 23:44:42.459290  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0903 23:44:42.505525  171911 logs.go:123] Gathering logs for dmesg ...
	I0903 23:44:42.505560  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0903 23:44:42.519712  171911 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:44:42.519744  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:44:42.583576  171911 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0903 23:44:42.583603  171911 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:44:42.583618  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0903 23:44:42.660899  171911 logs.go:123] Gathering logs for container status ...
	I0903 23:44:42.660936  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0903 23:44:45.200707  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:44:45.217299  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:44:45.217372  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:44:45.252045  171911 cri.go:89] found id: ""
	I0903 23:44:45.252073  171911 logs.go:282] 0 containers: []
	W0903 23:44:45.252081  171911 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:44:45.252087  171911 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:44:45.252155  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:44:45.287247  171911 cri.go:89] found id: ""
	I0903 23:44:45.287281  171911 logs.go:282] 0 containers: []
	W0903 23:44:45.287289  171911 logs.go:284] No container was found matching "etcd"
	I0903 23:44:45.287296  171911 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:44:45.287353  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:44:45.320423  171911 cri.go:89] found id: ""
	I0903 23:44:45.320450  171911 logs.go:282] 0 containers: []
	W0903 23:44:45.320457  171911 logs.go:284] No container was found matching "coredns"
	I0903 23:44:45.320463  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:44:45.320517  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:44:45.353147  171911 cri.go:89] found id: ""
	I0903 23:44:45.353179  171911 logs.go:282] 0 containers: []
	W0903 23:44:45.353187  171911 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:44:45.353193  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:44:45.353261  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:44:45.387052  171911 cri.go:89] found id: ""
	I0903 23:44:45.387080  171911 logs.go:282] 0 containers: []
	W0903 23:44:45.387089  171911 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:44:45.387096  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:44:45.387151  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:44:45.422621  171911 cri.go:89] found id: ""
	I0903 23:44:45.422651  171911 logs.go:282] 0 containers: []
	W0903 23:44:45.422659  171911 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:44:45.422666  171911 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:44:45.422734  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:44:45.457224  171911 cri.go:89] found id: ""
	I0903 23:44:45.457258  171911 logs.go:282] 0 containers: []
	W0903 23:44:45.457266  171911 logs.go:284] No container was found matching "kindnet"
	I0903 23:44:45.457274  171911 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0903 23:44:45.457339  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0903 23:44:45.490659  171911 cri.go:89] found id: ""
	I0903 23:44:45.490685  171911 logs.go:282] 0 containers: []
	W0903 23:44:45.490693  171911 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0903 23:44:45.490706  171911 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:44:45.490729  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:44:45.556871  171911 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0903 23:44:45.556894  171911 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:44:45.556909  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0903 23:44:45.628062  171911 logs.go:123] Gathering logs for container status ...
	I0903 23:44:45.628101  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0903 23:44:45.666937  171911 logs.go:123] Gathering logs for kubelet ...
	I0903 23:44:45.666977  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0903 23:44:45.713545  171911 logs.go:123] Gathering logs for dmesg ...
	I0903 23:44:45.713580  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0903 23:44:48.227552  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:44:48.245044  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:44:48.245118  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:44:48.279490  171911 cri.go:89] found id: ""
	I0903 23:44:48.279519  171911 logs.go:282] 0 containers: []
	W0903 23:44:48.279529  171911 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:44:48.279537  171911 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:44:48.279621  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:44:48.313971  171911 cri.go:89] found id: ""
	I0903 23:44:48.313998  171911 logs.go:282] 0 containers: []
	W0903 23:44:48.314006  171911 logs.go:284] No container was found matching "etcd"
	I0903 23:44:48.314012  171911 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:44:48.314076  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:44:48.349729  171911 cri.go:89] found id: ""
	I0903 23:44:48.349765  171911 logs.go:282] 0 containers: []
	W0903 23:44:48.349773  171911 logs.go:284] No container was found matching "coredns"
	I0903 23:44:48.349779  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:44:48.349843  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:44:48.384104  171911 cri.go:89] found id: ""
	I0903 23:44:48.384132  171911 logs.go:282] 0 containers: []
	W0903 23:44:48.384140  171911 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:44:48.384147  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:44:48.384210  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:44:48.418534  171911 cri.go:89] found id: ""
	I0903 23:44:48.418569  171911 logs.go:282] 0 containers: []
	W0903 23:44:48.418581  171911 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:44:48.418589  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:44:48.418656  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:44:48.452604  171911 cri.go:89] found id: ""
	I0903 23:44:48.452632  171911 logs.go:282] 0 containers: []
	W0903 23:44:48.452640  171911 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:44:48.452647  171911 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:44:48.452711  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:44:48.485587  171911 cri.go:89] found id: ""
	I0903 23:44:48.485618  171911 logs.go:282] 0 containers: []
	W0903 23:44:48.485629  171911 logs.go:284] No container was found matching "kindnet"
	I0903 23:44:48.485636  171911 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0903 23:44:48.485701  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0903 23:44:48.518840  171911 cri.go:89] found id: ""
	I0903 23:44:48.518865  171911 logs.go:282] 0 containers: []
	W0903 23:44:48.518876  171911 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0903 23:44:48.518890  171911 logs.go:123] Gathering logs for kubelet ...
	I0903 23:44:48.518906  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0903 23:44:48.566332  171911 logs.go:123] Gathering logs for dmesg ...
	I0903 23:44:48.566368  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0903 23:44:48.580074  171911 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:44:48.580103  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:44:48.646139  171911 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0903 23:44:48.646163  171911 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:44:48.646177  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0903 23:44:48.721508  171911 logs.go:123] Gathering logs for container status ...
	I0903 23:44:48.721551  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
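The block above is one full iteration of the health-check loop: roughly every three seconds (23:44:45, :48, :51, ...) the bootstrapper runs pgrep for kube-apiserver, asks crictl for each control-plane container in turn (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet, kubernetes-dashboard), finds none, and then gathers kubelet, dmesg, describe-nodes, CRI-O, and container-status output before retrying. A minimal sketch of running the same checks by hand, assuming a placeholder profile name (the real profile name does not appear in this excerpt):

	# Sketch: the diagnostics from the loop above, run manually against the node.
	# <profile> is a placeholder for the minikube profile under test.
	minikube ssh -p <profile> -- "sudo crictl ps -a --quiet --name=kube-apiserver"
	minikube ssh -p <profile> -- "sudo journalctl -u kubelet -n 400"
	minikube ssh -p <profile> -- "sudo journalctl -u crio -n 400"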
	I0903 23:44:51.261729  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:44:51.277615  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:44:51.277688  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:44:51.311728  171911 cri.go:89] found id: ""
	I0903 23:44:51.311758  171911 logs.go:282] 0 containers: []
	W0903 23:44:51.311767  171911 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:44:51.311773  171911 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:44:51.311841  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:44:51.346364  171911 cri.go:89] found id: ""
	I0903 23:44:51.346394  171911 logs.go:282] 0 containers: []
	W0903 23:44:51.346402  171911 logs.go:284] No container was found matching "etcd"
	I0903 23:44:51.346408  171911 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:44:51.346467  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:44:51.380196  171911 cri.go:89] found id: ""
	I0903 23:44:51.380233  171911 logs.go:282] 0 containers: []
	W0903 23:44:51.380249  171911 logs.go:284] No container was found matching "coredns"
	I0903 23:44:51.380259  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:44:51.380331  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:44:51.414829  171911 cri.go:89] found id: ""
	I0903 23:44:51.414861  171911 logs.go:282] 0 containers: []
	W0903 23:44:51.414869  171911 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:44:51.414875  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:44:51.414943  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:44:51.448741  171911 cri.go:89] found id: ""
	I0903 23:44:51.448779  171911 logs.go:282] 0 containers: []
	W0903 23:44:51.448792  171911 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:44:51.448801  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:44:51.448865  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:44:51.484499  171911 cri.go:89] found id: ""
	I0903 23:44:51.484537  171911 logs.go:282] 0 containers: []
	W0903 23:44:51.484545  171911 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:44:51.484552  171911 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:44:51.484605  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:44:51.518538  171911 cri.go:89] found id: ""
	I0903 23:44:51.518568  171911 logs.go:282] 0 containers: []
	W0903 23:44:51.518580  171911 logs.go:284] No container was found matching "kindnet"
	I0903 23:44:51.518589  171911 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0903 23:44:51.518649  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0903 23:44:51.560124  171911 cri.go:89] found id: ""
	I0903 23:44:51.560158  171911 logs.go:282] 0 containers: []
	W0903 23:44:51.560168  171911 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0903 23:44:51.560193  171911 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:44:51.560207  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0903 23:44:51.636716  171911 logs.go:123] Gathering logs for container status ...
	I0903 23:44:51.636760  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0903 23:44:51.674322  171911 logs.go:123] Gathering logs for kubelet ...
	I0903 23:44:51.674355  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0903 23:44:51.723819  171911 logs.go:123] Gathering logs for dmesg ...
	I0903 23:44:51.723856  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0903 23:44:51.737446  171911 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:44:51.737478  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:44:51.800575  171911 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0903 23:44:54.300746  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:44:54.317060  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:44:54.317135  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:44:54.350356  171911 cri.go:89] found id: ""
	I0903 23:44:54.350382  171911 logs.go:282] 0 containers: []
	W0903 23:44:54.350389  171911 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:44:54.350396  171911 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:44:54.350458  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:44:54.386548  171911 cri.go:89] found id: ""
	I0903 23:44:54.386577  171911 logs.go:282] 0 containers: []
	W0903 23:44:54.386586  171911 logs.go:284] No container was found matching "etcd"
	I0903 23:44:54.386593  171911 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:44:54.386647  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:44:54.423360  171911 cri.go:89] found id: ""
	I0903 23:44:54.423388  171911 logs.go:282] 0 containers: []
	W0903 23:44:54.423395  171911 logs.go:284] No container was found matching "coredns"
	I0903 23:44:54.423407  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:44:54.423458  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:44:54.458673  171911 cri.go:89] found id: ""
	I0903 23:44:54.458701  171911 logs.go:282] 0 containers: []
	W0903 23:44:54.458709  171911 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:44:54.458716  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:44:54.458781  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:44:54.491692  171911 cri.go:89] found id: ""
	I0903 23:44:54.491726  171911 logs.go:282] 0 containers: []
	W0903 23:44:54.491738  171911 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:44:54.491746  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:44:54.491809  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:44:54.524500  171911 cri.go:89] found id: ""
	I0903 23:44:54.524530  171911 logs.go:282] 0 containers: []
	W0903 23:44:54.524543  171911 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:44:54.524550  171911 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:44:54.524614  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:44:54.558644  171911 cri.go:89] found id: ""
	I0903 23:44:54.558676  171911 logs.go:282] 0 containers: []
	W0903 23:44:54.558688  171911 logs.go:284] No container was found matching "kindnet"
	I0903 23:44:54.558696  171911 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0903 23:44:54.558773  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0903 23:44:54.592814  171911 cri.go:89] found id: ""
	I0903 23:44:54.592841  171911 logs.go:282] 0 containers: []
	W0903 23:44:54.592851  171911 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0903 23:44:54.592863  171911 logs.go:123] Gathering logs for kubelet ...
	I0903 23:44:54.592879  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0903 23:44:54.642538  171911 logs.go:123] Gathering logs for dmesg ...
	I0903 23:44:54.642572  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0903 23:44:54.656435  171911 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:44:54.656468  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:44:54.721260  171911 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0903 23:44:54.721286  171911 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:44:54.721304  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0903 23:44:54.798283  171911 logs.go:123] Gathering logs for container status ...
	I0903 23:44:54.798323  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0903 23:44:57.337294  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:44:57.353760  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:44:57.353842  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:44:57.387108  171911 cri.go:89] found id: ""
	I0903 23:44:57.387136  171911 logs.go:282] 0 containers: []
	W0903 23:44:57.387146  171911 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:44:57.387153  171911 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:44:57.387219  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:44:57.421245  171911 cri.go:89] found id: ""
	I0903 23:44:57.421273  171911 logs.go:282] 0 containers: []
	W0903 23:44:57.421283  171911 logs.go:284] No container was found matching "etcd"
	I0903 23:44:57.421291  171911 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:44:57.421367  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:44:57.455403  171911 cri.go:89] found id: ""
	I0903 23:44:57.455431  171911 logs.go:282] 0 containers: []
	W0903 23:44:57.455441  171911 logs.go:284] No container was found matching "coredns"
	I0903 23:44:57.455450  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:44:57.455510  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:44:57.487825  171911 cri.go:89] found id: ""
	I0903 23:44:57.487860  171911 logs.go:282] 0 containers: []
	W0903 23:44:57.487871  171911 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:44:57.487880  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:44:57.487935  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:44:57.522048  171911 cri.go:89] found id: ""
	I0903 23:44:57.522073  171911 logs.go:282] 0 containers: []
	W0903 23:44:57.522081  171911 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:44:57.522087  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:44:57.522140  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:44:57.555520  171911 cri.go:89] found id: ""
	I0903 23:44:57.555545  171911 logs.go:282] 0 containers: []
	W0903 23:44:57.555553  171911 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:44:57.555560  171911 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:44:57.555622  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:44:57.588895  171911 cri.go:89] found id: ""
	I0903 23:44:57.588924  171911 logs.go:282] 0 containers: []
	W0903 23:44:57.588933  171911 logs.go:284] No container was found matching "kindnet"
	I0903 23:44:57.588941  171911 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0903 23:44:57.589002  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0903 23:44:57.623152  171911 cri.go:89] found id: ""
	I0903 23:44:57.623190  171911 logs.go:282] 0 containers: []
	W0903 23:44:57.623198  171911 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0903 23:44:57.623207  171911 logs.go:123] Gathering logs for kubelet ...
	I0903 23:44:57.623217  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0903 23:44:57.672898  171911 logs.go:123] Gathering logs for dmesg ...
	I0903 23:44:57.672938  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0903 23:44:57.686578  171911 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:44:57.686611  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:44:57.750436  171911 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0903 23:44:57.750467  171911 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:44:57.750485  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0903 23:44:57.830779  171911 logs.go:123] Gathering logs for container status ...
	I0903 23:44:57.830829  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0903 23:45:00.371014  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:45:00.387297  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:45:00.387414  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:45:00.420632  171911 cri.go:89] found id: ""
	I0903 23:45:00.420662  171911 logs.go:282] 0 containers: []
	W0903 23:45:00.420670  171911 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:45:00.420676  171911 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:45:00.420729  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:45:00.453824  171911 cri.go:89] found id: ""
	I0903 23:45:00.453852  171911 logs.go:282] 0 containers: []
	W0903 23:45:00.453860  171911 logs.go:284] No container was found matching "etcd"
	I0903 23:45:00.453866  171911 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:45:00.453917  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:45:00.488618  171911 cri.go:89] found id: ""
	I0903 23:45:00.488650  171911 logs.go:282] 0 containers: []
	W0903 23:45:00.488661  171911 logs.go:284] No container was found matching "coredns"
	I0903 23:45:00.488669  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:45:00.488738  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:45:00.522545  171911 cri.go:89] found id: ""
	I0903 23:45:00.522579  171911 logs.go:282] 0 containers: []
	W0903 23:45:00.522587  171911 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:45:00.522595  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:45:00.522655  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:45:00.555419  171911 cri.go:89] found id: ""
	I0903 23:45:00.555445  171911 logs.go:282] 0 containers: []
	W0903 23:45:00.555453  171911 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:45:00.555459  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:45:00.555515  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:45:00.588742  171911 cri.go:89] found id: ""
	I0903 23:45:00.588777  171911 logs.go:282] 0 containers: []
	W0903 23:45:00.588790  171911 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:45:00.588799  171911 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:45:00.588876  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:45:00.621164  171911 cri.go:89] found id: ""
	I0903 23:45:00.621194  171911 logs.go:282] 0 containers: []
	W0903 23:45:00.621205  171911 logs.go:284] No container was found matching "kindnet"
	I0903 23:45:00.621212  171911 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0903 23:45:00.621287  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0903 23:45:00.652140  171911 cri.go:89] found id: ""
	I0903 23:45:00.652167  171911 logs.go:282] 0 containers: []
	W0903 23:45:00.652178  171911 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0903 23:45:00.652191  171911 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:45:00.652206  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0903 23:45:00.733518  171911 logs.go:123] Gathering logs for container status ...
	I0903 23:45:00.733560  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0903 23:45:00.770455  171911 logs.go:123] Gathering logs for kubelet ...
	I0903 23:45:00.770489  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0903 23:45:00.819129  171911 logs.go:123] Gathering logs for dmesg ...
	I0903 23:45:00.819161  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0903 23:45:00.832460  171911 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:45:00.832492  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:45:00.895930  171911 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
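Every "describe nodes" attempt fails the same way: with no kube-apiserver container running, nothing is listening on the control-plane port, so kubectl's connection to localhost:8443 is refused before any API call can be made. One way to confirm this from inside the node (the ss invocation is an assumption; any socket-listing tool shows the same thing, and <profile> is the placeholder introduced above):

	# Expect no listener on 8443 while the apiserver is down.
	minikube ssh -p <profile> -- "sudo ss -tlnp | grep 8443 || echo 'nothing listening on 8443'"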
	I0903 23:45:03.397643  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:45:03.414370  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:45:03.414441  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:45:03.448753  171911 cri.go:89] found id: ""
	I0903 23:45:03.448787  171911 logs.go:282] 0 containers: []
	W0903 23:45:03.448795  171911 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:45:03.448802  171911 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:45:03.448860  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:45:03.484668  171911 cri.go:89] found id: ""
	I0903 23:45:03.484696  171911 logs.go:282] 0 containers: []
	W0903 23:45:03.484703  171911 logs.go:284] No container was found matching "etcd"
	I0903 23:45:03.484709  171911 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:45:03.484763  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:45:03.517157  171911 cri.go:89] found id: ""
	I0903 23:45:03.517184  171911 logs.go:282] 0 containers: []
	W0903 23:45:03.517191  171911 logs.go:284] No container was found matching "coredns"
	I0903 23:45:03.517197  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:45:03.517250  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:45:03.552220  171911 cri.go:89] found id: ""
	I0903 23:45:03.552246  171911 logs.go:282] 0 containers: []
	W0903 23:45:03.552255  171911 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:45:03.552262  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:45:03.552328  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:45:03.585731  171911 cri.go:89] found id: ""
	I0903 23:45:03.585764  171911 logs.go:282] 0 containers: []
	W0903 23:45:03.585774  171911 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:45:03.585783  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:45:03.585854  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:45:03.619396  171911 cri.go:89] found id: ""
	I0903 23:45:03.619425  171911 logs.go:282] 0 containers: []
	W0903 23:45:03.619433  171911 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:45:03.619439  171911 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:45:03.619503  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:45:03.653461  171911 cri.go:89] found id: ""
	I0903 23:45:03.653489  171911 logs.go:282] 0 containers: []
	W0903 23:45:03.653500  171911 logs.go:284] No container was found matching "kindnet"
	I0903 23:45:03.653509  171911 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0903 23:45:03.653562  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0903 23:45:03.690075  171911 cri.go:89] found id: ""
	I0903 23:45:03.690102  171911 logs.go:282] 0 containers: []
	W0903 23:45:03.690112  171911 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0903 23:45:03.690123  171911 logs.go:123] Gathering logs for kubelet ...
	I0903 23:45:03.690139  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0903 23:45:03.742271  171911 logs.go:123] Gathering logs for dmesg ...
	I0903 23:45:03.742305  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0903 23:45:03.755513  171911 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:45:03.755548  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:45:03.817702  171911 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0903 23:45:03.817734  171911 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:45:03.817758  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0903 23:45:03.894336  171911 logs.go:123] Gathering logs for container status ...
	I0903 23:45:03.894377  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0903 23:45:06.433897  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:45:06.450322  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:45:06.450386  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:45:06.482782  171911 cri.go:89] found id: ""
	I0903 23:45:06.482810  171911 logs.go:282] 0 containers: []
	W0903 23:45:06.482818  171911 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:45:06.482824  171911 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:45:06.482878  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:45:06.516065  171911 cri.go:89] found id: ""
	I0903 23:45:06.516098  171911 logs.go:282] 0 containers: []
	W0903 23:45:06.516106  171911 logs.go:284] No container was found matching "etcd"
	I0903 23:45:06.516112  171911 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:45:06.516164  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:45:06.548668  171911 cri.go:89] found id: ""
	I0903 23:45:06.548695  171911 logs.go:282] 0 containers: []
	W0903 23:45:06.548703  171911 logs.go:284] No container was found matching "coredns"
	I0903 23:45:06.548710  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:45:06.548765  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:45:06.580287  171911 cri.go:89] found id: ""
	I0903 23:45:06.580316  171911 logs.go:282] 0 containers: []
	W0903 23:45:06.580324  171911 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:45:06.580331  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:45:06.580385  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:45:06.613698  171911 cri.go:89] found id: ""
	I0903 23:45:06.613728  171911 logs.go:282] 0 containers: []
	W0903 23:45:06.613736  171911 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:45:06.613742  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:45:06.613798  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:45:06.648492  171911 cri.go:89] found id: ""
	I0903 23:45:06.648520  171911 logs.go:282] 0 containers: []
	W0903 23:45:06.648531  171911 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:45:06.648539  171911 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:45:06.648591  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:45:06.682079  171911 cri.go:89] found id: ""
	I0903 23:45:06.682105  171911 logs.go:282] 0 containers: []
	W0903 23:45:06.682114  171911 logs.go:284] No container was found matching "kindnet"
	I0903 23:45:06.682123  171911 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0903 23:45:06.682182  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0903 23:45:06.717523  171911 cri.go:89] found id: ""
	I0903 23:45:06.717551  171911 logs.go:282] 0 containers: []
	W0903 23:45:06.717559  171911 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0903 23:45:06.717568  171911 logs.go:123] Gathering logs for kubelet ...
	I0903 23:45:06.717580  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0903 23:45:06.766524  171911 logs.go:123] Gathering logs for dmesg ...
	I0903 23:45:06.766557  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0903 23:45:06.779931  171911 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:45:06.779960  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:45:06.843183  171911 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0903 23:45:06.843204  171911 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:45:06.843217  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0903 23:45:06.919233  171911 logs.go:123] Gathering logs for container status ...
	I0903 23:45:06.919270  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0903 23:45:09.456643  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:45:09.475777  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:45:09.475855  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:45:09.516030  171911 cri.go:89] found id: ""
	I0903 23:45:09.516066  171911 logs.go:282] 0 containers: []
	W0903 23:45:09.516078  171911 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:45:09.516086  171911 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:45:09.516155  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:45:09.556025  171911 cri.go:89] found id: ""
	I0903 23:45:09.556058  171911 logs.go:282] 0 containers: []
	W0903 23:45:09.556071  171911 logs.go:284] No container was found matching "etcd"
	I0903 23:45:09.556080  171911 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:45:09.556145  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:45:09.596343  171911 cri.go:89] found id: ""
	I0903 23:45:09.596375  171911 logs.go:282] 0 containers: []
	W0903 23:45:09.596384  171911 logs.go:284] No container was found matching "coredns"
	I0903 23:45:09.596393  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:45:09.596456  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:45:09.634286  171911 cri.go:89] found id: ""
	I0903 23:45:09.634323  171911 logs.go:282] 0 containers: []
	W0903 23:45:09.634330  171911 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:45:09.634336  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:45:09.634387  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:45:09.667579  171911 cri.go:89] found id: ""
	I0903 23:45:09.667617  171911 logs.go:282] 0 containers: []
	W0903 23:45:09.667629  171911 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:45:09.667637  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:45:09.667709  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:45:09.702631  171911 cri.go:89] found id: ""
	I0903 23:45:09.702661  171911 logs.go:282] 0 containers: []
	W0903 23:45:09.702670  171911 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:45:09.702677  171911 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:45:09.702744  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:45:09.736481  171911 cri.go:89] found id: ""
	I0903 23:45:09.736513  171911 logs.go:282] 0 containers: []
	W0903 23:45:09.736522  171911 logs.go:284] No container was found matching "kindnet"
	I0903 23:45:09.736528  171911 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0903 23:45:09.736594  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0903 23:45:09.768392  171911 cri.go:89] found id: ""
	I0903 23:45:09.768420  171911 logs.go:282] 0 containers: []
	W0903 23:45:09.768428  171911 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0903 23:45:09.768438  171911 logs.go:123] Gathering logs for container status ...
	I0903 23:45:09.768454  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0903 23:45:09.804233  171911 logs.go:123] Gathering logs for kubelet ...
	I0903 23:45:09.804262  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0903 23:45:09.854916  171911 logs.go:123] Gathering logs for dmesg ...
	I0903 23:45:09.854951  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0903 23:45:09.868290  171911 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:45:09.868326  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:45:09.937659  171911 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0903 23:45:09.937686  171911 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:45:09.937702  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0903 23:45:12.515352  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:45:12.532069  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:45:12.532138  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:45:12.566307  171911 cri.go:89] found id: ""
	I0903 23:45:12.566347  171911 logs.go:282] 0 containers: []
	W0903 23:45:12.566356  171911 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:45:12.566361  171911 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:45:12.566413  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:45:12.600883  171911 cri.go:89] found id: ""
	I0903 23:45:12.600911  171911 logs.go:282] 0 containers: []
	W0903 23:45:12.600919  171911 logs.go:284] No container was found matching "etcd"
	I0903 23:45:12.600925  171911 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:45:12.600976  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:45:12.634831  171911 cri.go:89] found id: ""
	I0903 23:45:12.634860  171911 logs.go:282] 0 containers: []
	W0903 23:45:12.634868  171911 logs.go:284] No container was found matching "coredns"
	I0903 23:45:12.634874  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:45:12.634932  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:45:12.668965  171911 cri.go:89] found id: ""
	I0903 23:45:12.668993  171911 logs.go:282] 0 containers: []
	W0903 23:45:12.669002  171911 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:45:12.669008  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:45:12.669061  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:45:12.702632  171911 cri.go:89] found id: ""
	I0903 23:45:12.702662  171911 logs.go:282] 0 containers: []
	W0903 23:45:12.702670  171911 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:45:12.702676  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:45:12.702734  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:45:12.736957  171911 cri.go:89] found id: ""
	I0903 23:45:12.736994  171911 logs.go:282] 0 containers: []
	W0903 23:45:12.737005  171911 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:45:12.737013  171911 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:45:12.737096  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:45:12.769324  171911 cri.go:89] found id: ""
	I0903 23:45:12.769353  171911 logs.go:282] 0 containers: []
	W0903 23:45:12.769361  171911 logs.go:284] No container was found matching "kindnet"
	I0903 23:45:12.769367  171911 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0903 23:45:12.769433  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0903 23:45:12.801706  171911 cri.go:89] found id: ""
	I0903 23:45:12.801731  171911 logs.go:282] 0 containers: []
	W0903 23:45:12.801738  171911 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0903 23:45:12.801747  171911 logs.go:123] Gathering logs for kubelet ...
	I0903 23:45:12.801758  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0903 23:45:12.850449  171911 logs.go:123] Gathering logs for dmesg ...
	I0903 23:45:12.850485  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0903 23:45:12.864235  171911 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:45:12.864263  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:45:12.928347  171911 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0903 23:45:12.928372  171911 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:45:12.928385  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0903 23:45:13.002530  171911 logs.go:123] Gathering logs for container status ...
	I0903 23:45:13.002569  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0903 23:45:15.541753  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:45:15.558031  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:45:15.558098  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:45:15.590544  171911 cri.go:89] found id: ""
	I0903 23:45:15.590590  171911 logs.go:282] 0 containers: []
	W0903 23:45:15.590608  171911 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:45:15.590618  171911 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:45:15.590681  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:45:15.623172  171911 cri.go:89] found id: ""
	I0903 23:45:15.623206  171911 logs.go:282] 0 containers: []
	W0903 23:45:15.623214  171911 logs.go:284] No container was found matching "etcd"
	I0903 23:45:15.623220  171911 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:45:15.623271  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:45:15.666374  171911 cri.go:89] found id: ""
	I0903 23:45:15.666413  171911 logs.go:282] 0 containers: []
	W0903 23:45:15.666424  171911 logs.go:284] No container was found matching "coredns"
	I0903 23:45:15.666432  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:45:15.666500  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:45:15.700153  171911 cri.go:89] found id: ""
	I0903 23:45:15.700188  171911 logs.go:282] 0 containers: []
	W0903 23:45:15.700196  171911 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:45:15.700203  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:45:15.700258  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:45:15.734346  171911 cri.go:89] found id: ""
	I0903 23:45:15.734379  171911 logs.go:282] 0 containers: []
	W0903 23:45:15.734391  171911 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:45:15.734401  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:45:15.734468  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:45:15.768125  171911 cri.go:89] found id: ""
	I0903 23:45:15.768151  171911 logs.go:282] 0 containers: []
	W0903 23:45:15.768160  171911 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:45:15.768166  171911 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:45:15.768219  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:45:15.802055  171911 cri.go:89] found id: ""
	I0903 23:45:15.802085  171911 logs.go:282] 0 containers: []
	W0903 23:45:15.802093  171911 logs.go:284] No container was found matching "kindnet"
	I0903 23:45:15.802101  171911 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0903 23:45:15.802155  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0903 23:45:15.835742  171911 cri.go:89] found id: ""
	I0903 23:45:15.835775  171911 logs.go:282] 0 containers: []
	W0903 23:45:15.835785  171911 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0903 23:45:15.835796  171911 logs.go:123] Gathering logs for kubelet ...
	I0903 23:45:15.835809  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0903 23:45:15.887302  171911 logs.go:123] Gathering logs for dmesg ...
	I0903 23:45:15.887339  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0903 23:45:15.900589  171911 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:45:15.900616  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:45:15.963821  171911 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0903 23:45:15.963850  171911 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:45:15.963867  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0903 23:45:16.041873  171911 logs.go:123] Gathering logs for container status ...
	I0903 23:45:16.041910  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0903 23:45:18.579975  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:45:18.596552  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:45:18.596644  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:45:18.637122  171911 cri.go:89] found id: ""
	I0903 23:45:18.637150  171911 logs.go:282] 0 containers: []
	W0903 23:45:18.637159  171911 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:45:18.637168  171911 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:45:18.637231  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:45:18.683926  171911 cri.go:89] found id: ""
	I0903 23:45:18.683965  171911 logs.go:282] 0 containers: []
	W0903 23:45:18.683976  171911 logs.go:284] No container was found matching "etcd"
	I0903 23:45:18.683984  171911 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:45:18.684143  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:45:18.724297  171911 cri.go:89] found id: ""
	I0903 23:45:18.724326  171911 logs.go:282] 0 containers: []
	W0903 23:45:18.724337  171911 logs.go:284] No container was found matching "coredns"
	I0903 23:45:18.724356  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:45:18.724424  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:45:18.767543  171911 cri.go:89] found id: ""
	I0903 23:45:18.767585  171911 logs.go:282] 0 containers: []
	W0903 23:45:18.767594  171911 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:45:18.767601  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:45:18.767666  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:45:18.808984  171911 cri.go:89] found id: ""
	I0903 23:45:18.809023  171911 logs.go:282] 0 containers: []
	W0903 23:45:18.809034  171911 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:45:18.809042  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:45:18.809125  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:45:18.843616  171911 cri.go:89] found id: ""
	I0903 23:45:18.843651  171911 logs.go:282] 0 containers: []
	W0903 23:45:18.843662  171911 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:45:18.843670  171911 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:45:18.843772  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:45:18.878089  171911 cri.go:89] found id: ""
	I0903 23:45:18.878117  171911 logs.go:282] 0 containers: []
	W0903 23:45:18.878125  171911 logs.go:284] No container was found matching "kindnet"
	I0903 23:45:18.878131  171911 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0903 23:45:18.878199  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0903 23:45:18.913557  171911 cri.go:89] found id: ""
	I0903 23:45:18.913590  171911 logs.go:282] 0 containers: []
	W0903 23:45:18.913602  171911 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0903 23:45:18.913613  171911 logs.go:123] Gathering logs for kubelet ...
	I0903 23:45:18.913629  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0903 23:45:18.964473  171911 logs.go:123] Gathering logs for dmesg ...
	I0903 23:45:18.964511  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0903 23:45:18.977841  171911 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:45:18.977868  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:45:19.041151  171911 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0903 23:45:19.041175  171911 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:45:19.041190  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0903 23:45:19.114112  171911 logs.go:123] Gathering logs for container status ...
	I0903 23:45:19.114166  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0903 23:45:21.655099  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:45:21.671751  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:45:21.671826  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:45:21.705950  171911 cri.go:89] found id: ""
	I0903 23:45:21.705985  171911 logs.go:282] 0 containers: []
	W0903 23:45:21.705993  171911 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:45:21.706000  171911 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:45:21.706066  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:45:21.745098  171911 cri.go:89] found id: ""
	I0903 23:45:21.745125  171911 logs.go:282] 0 containers: []
	W0903 23:45:21.745134  171911 logs.go:284] No container was found matching "etcd"
	I0903 23:45:21.745139  171911 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:45:21.745212  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:45:21.787214  171911 cri.go:89] found id: ""
	I0903 23:45:21.787246  171911 logs.go:282] 0 containers: []
	W0903 23:45:21.787259  171911 logs.go:284] No container was found matching "coredns"
	I0903 23:45:21.787267  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:45:21.787340  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:45:21.825966  171911 cri.go:89] found id: ""
	I0903 23:45:21.825999  171911 logs.go:282] 0 containers: []
	W0903 23:45:21.826009  171911 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:45:21.826023  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:45:21.826094  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:45:21.858874  171911 cri.go:89] found id: ""
	I0903 23:45:21.858909  171911 logs.go:282] 0 containers: []
	W0903 23:45:21.858920  171911 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:45:21.858928  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:45:21.858990  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:45:21.892820  171911 cri.go:89] found id: ""
	I0903 23:45:21.892851  171911 logs.go:282] 0 containers: []
	W0903 23:45:21.892862  171911 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:45:21.892869  171911 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:45:21.892938  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:45:21.927139  171911 cri.go:89] found id: ""
	I0903 23:45:21.927167  171911 logs.go:282] 0 containers: []
	W0903 23:45:21.927174  171911 logs.go:284] No container was found matching "kindnet"
	I0903 23:45:21.927180  171911 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0903 23:45:21.927242  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0903 23:45:21.961202  171911 cri.go:89] found id: ""
	I0903 23:45:21.961235  171911 logs.go:282] 0 containers: []
	W0903 23:45:21.961247  171911 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0903 23:45:21.961259  171911 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:45:21.961274  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0903 23:45:22.034253  171911 logs.go:123] Gathering logs for container status ...
	I0903 23:45:22.034307  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0903 23:45:22.081973  171911 logs.go:123] Gathering logs for kubelet ...
	I0903 23:45:22.082014  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0903 23:45:22.136441  171911 logs.go:123] Gathering logs for dmesg ...
	I0903 23:45:22.136507  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0903 23:45:22.153988  171911 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:45:22.154027  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:45:22.218718  171911 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
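
Each retry walks the same eight component names, one crictl query apiece. A compact equivalent of that enumeration, as a sketch of what the calls above do:

	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	            kube-controller-manager kindnet kubernetes-dashboard; do
	  sudo crictl ps -a --quiet --name="$name"   # prints nothing when no container matches
	done
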
	I0903 23:45:24.718932  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:45:24.735304  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:45:24.735366  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:45:24.769484  171911 cri.go:89] found id: ""
	I0903 23:45:24.769526  171911 logs.go:282] 0 containers: []
	W0903 23:45:24.769534  171911 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:45:24.769541  171911 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:45:24.769602  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:45:24.804478  171911 cri.go:89] found id: ""
	I0903 23:45:24.804512  171911 logs.go:282] 0 containers: []
	W0903 23:45:24.804523  171911 logs.go:284] No container was found matching "etcd"
	I0903 23:45:24.804531  171911 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:45:24.804616  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:45:24.839941  171911 cri.go:89] found id: ""
	I0903 23:45:24.839967  171911 logs.go:282] 0 containers: []
	W0903 23:45:24.839974  171911 logs.go:284] No container was found matching "coredns"
	I0903 23:45:24.839980  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:45:24.840043  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:45:24.872589  171911 cri.go:89] found id: ""
	I0903 23:45:24.872631  171911 logs.go:282] 0 containers: []
	W0903 23:45:24.872641  171911 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:45:24.872650  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:45:24.872713  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:45:24.906281  171911 cri.go:89] found id: ""
	I0903 23:45:24.906312  171911 logs.go:282] 0 containers: []
	W0903 23:45:24.906321  171911 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:45:24.906327  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:45:24.906381  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:45:24.940855  171911 cri.go:89] found id: ""
	I0903 23:45:24.940891  171911 logs.go:282] 0 containers: []
	W0903 23:45:24.940902  171911 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:45:24.940910  171911 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:45:24.940979  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:45:24.973046  171911 cri.go:89] found id: ""
	I0903 23:45:24.973075  171911 logs.go:282] 0 containers: []
	W0903 23:45:24.973084  171911 logs.go:284] No container was found matching "kindnet"
	I0903 23:45:24.973091  171911 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0903 23:45:24.973160  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0903 23:45:25.006986  171911 cri.go:89] found id: ""
	I0903 23:45:25.007015  171911 logs.go:282] 0 containers: []
	W0903 23:45:25.007026  171911 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0903 23:45:25.007038  171911 logs.go:123] Gathering logs for kubelet ...
	I0903 23:45:25.007054  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0903 23:45:25.057037  171911 logs.go:123] Gathering logs for dmesg ...
	I0903 23:45:25.057075  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0903 23:45:25.070713  171911 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:45:25.070741  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:45:25.135104  171911 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0903 23:45:25.135129  171911 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:45:25.135142  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0903 23:45:25.211776  171911 logs.go:123] Gathering logs for container status ...
	I0903 23:45:25.211816  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0903 23:45:27.750263  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:45:27.766962  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:45:27.767039  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:45:27.809102  171911 cri.go:89] found id: ""
	I0903 23:45:27.809134  171911 logs.go:282] 0 containers: []
	W0903 23:45:27.809142  171911 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:45:27.809149  171911 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:45:27.809201  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:45:27.852918  171911 cri.go:89] found id: ""
	I0903 23:45:27.852946  171911 logs.go:282] 0 containers: []
	W0903 23:45:27.852954  171911 logs.go:284] No container was found matching "etcd"
	I0903 23:45:27.852961  171911 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:45:27.853025  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:45:27.908523  171911 cri.go:89] found id: ""
	I0903 23:45:27.908554  171911 logs.go:282] 0 containers: []
	W0903 23:45:27.908561  171911 logs.go:284] No container was found matching "coredns"
	I0903 23:45:27.908566  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:45:27.908627  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:45:27.941105  171911 cri.go:89] found id: ""
	I0903 23:45:27.941136  171911 logs.go:282] 0 containers: []
	W0903 23:45:27.941144  171911 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:45:27.941150  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:45:27.941204  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:45:27.974030  171911 cri.go:89] found id: ""
	I0903 23:45:27.974064  171911 logs.go:282] 0 containers: []
	W0903 23:45:27.974075  171911 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:45:27.974082  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:45:27.974149  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:45:28.007829  171911 cri.go:89] found id: ""
	I0903 23:45:28.007857  171911 logs.go:282] 0 containers: []
	W0903 23:45:28.007867  171911 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:45:28.007874  171911 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:45:28.007936  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:45:28.050575  171911 cri.go:89] found id: ""
	I0903 23:45:28.050614  171911 logs.go:282] 0 containers: []
	W0903 23:45:28.050622  171911 logs.go:284] No container was found matching "kindnet"
	I0903 23:45:28.050629  171911 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0903 23:45:28.050684  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0903 23:45:28.085777  171911 cri.go:89] found id: ""
	I0903 23:45:28.085809  171911 logs.go:282] 0 containers: []
	W0903 23:45:28.085817  171911 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0903 23:45:28.085826  171911 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:45:28.085838  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:45:28.150751  171911 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0903 23:45:28.150778  171911 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:45:28.150792  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0903 23:45:28.223955  171911 logs.go:123] Gathering logs for container status ...
	I0903 23:45:28.224000  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0903 23:45:28.262972  171911 logs.go:123] Gathering logs for kubelet ...
	I0903 23:45:28.262999  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0903 23:45:28.311545  171911 logs.go:123] Gathering logs for dmesg ...
	I0903 23:45:28.311580  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
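
With "describe nodes" unavailable, every pass falls back to the same bounded sources — kubelet and CRI-O unit logs via journalctl, the kernel ring buffer via dmesg, and container status — each capped at 400 lines, so the post-mortem stays a fixed size no matter how many retries accumulate. Only the gather order varies between passes, never the content:

	sudo journalctl -u kubelet -n 400    # newest 400 lines of the unit's journal
	sudo journalctl -u crio -n 400
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
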
	I0903 23:45:30.827970  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:45:30.844742  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:45:30.844805  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:45:30.880412  171911 cri.go:89] found id: ""
	I0903 23:45:30.880453  171911 logs.go:282] 0 containers: []
	W0903 23:45:30.880468  171911 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:45:30.880476  171911 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:45:30.880549  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:45:30.913830  171911 cri.go:89] found id: ""
	I0903 23:45:30.913858  171911 logs.go:282] 0 containers: []
	W0903 23:45:30.913867  171911 logs.go:284] No container was found matching "etcd"
	I0903 23:45:30.913872  171911 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:45:30.913935  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:45:30.946611  171911 cri.go:89] found id: ""
	I0903 23:45:30.946641  171911 logs.go:282] 0 containers: []
	W0903 23:45:30.946650  171911 logs.go:284] No container was found matching "coredns"
	I0903 23:45:30.946656  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:45:30.946711  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:45:30.980152  171911 cri.go:89] found id: ""
	I0903 23:45:30.980183  171911 logs.go:282] 0 containers: []
	W0903 23:45:30.980193  171911 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:45:30.980201  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:45:30.980271  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:45:31.015814  171911 cri.go:89] found id: ""
	I0903 23:45:31.015845  171911 logs.go:282] 0 containers: []
	W0903 23:45:31.015856  171911 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:45:31.015863  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:45:31.015932  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:45:31.050513  171911 cri.go:89] found id: ""
	I0903 23:45:31.050543  171911 logs.go:282] 0 containers: []
	W0903 23:45:31.050555  171911 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:45:31.050562  171911 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:45:31.050636  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:45:31.083766  171911 cri.go:89] found id: ""
	I0903 23:45:31.083791  171911 logs.go:282] 0 containers: []
	W0903 23:45:31.083798  171911 logs.go:284] No container was found matching "kindnet"
	I0903 23:45:31.083805  171911 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0903 23:45:31.083864  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0903 23:45:31.117858  171911 cri.go:89] found id: ""
	I0903 23:45:31.117886  171911 logs.go:282] 0 containers: []
	W0903 23:45:31.117893  171911 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0903 23:45:31.117903  171911 logs.go:123] Gathering logs for dmesg ...
	I0903 23:45:31.117922  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0903 23:45:31.131404  171911 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:45:31.131433  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:45:31.195245  171911 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0903 23:45:31.195275  171911 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:45:31.195295  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0903 23:45:31.271630  171911 logs.go:123] Gathering logs for container status ...
	I0903 23:45:31.271671  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0903 23:45:31.310746  171911 logs.go:123] Gathering logs for kubelet ...
	I0903 23:45:31.310780  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
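
Note that every listing runs with State:all, so exited containers would be reported too. The consistently empty results therefore point away from a crash-looping apiserver and toward the pods never being started at all — a kubelet- or bootstrap-level failure:

	sudo crictl ps -a --quiet --name=kube-apiserver   # -a: all states, including exited
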
	I0903 23:45:33.861848  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:45:33.878672  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:45:33.878742  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:45:33.911344  171911 cri.go:89] found id: ""
	I0903 23:45:33.911377  171911 logs.go:282] 0 containers: []
	W0903 23:45:33.911388  171911 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:45:33.911396  171911 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:45:33.911458  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:45:33.948348  171911 cri.go:89] found id: ""
	I0903 23:45:33.948377  171911 logs.go:282] 0 containers: []
	W0903 23:45:33.948385  171911 logs.go:284] No container was found matching "etcd"
	I0903 23:45:33.948391  171911 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:45:33.948455  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:45:33.981680  171911 cri.go:89] found id: ""
	I0903 23:45:33.981710  171911 logs.go:282] 0 containers: []
	W0903 23:45:33.981722  171911 logs.go:284] No container was found matching "coredns"
	I0903 23:45:33.981730  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:45:33.981796  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:45:34.013721  171911 cri.go:89] found id: ""
	I0903 23:45:34.013747  171911 logs.go:282] 0 containers: []
	W0903 23:45:34.013755  171911 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:45:34.013762  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:45:34.013827  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:45:34.047612  171911 cri.go:89] found id: ""
	I0903 23:45:34.047644  171911 logs.go:282] 0 containers: []
	W0903 23:45:34.047654  171911 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:45:34.047661  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:45:34.047720  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:45:34.081680  171911 cri.go:89] found id: ""
	I0903 23:45:34.081714  171911 logs.go:282] 0 containers: []
	W0903 23:45:34.081725  171911 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:45:34.081734  171911 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:45:34.081802  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:45:34.117208  171911 cri.go:89] found id: ""
	I0903 23:45:34.117247  171911 logs.go:282] 0 containers: []
	W0903 23:45:34.117258  171911 logs.go:284] No container was found matching "kindnet"
	I0903 23:45:34.117268  171911 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0903 23:45:34.117339  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0903 23:45:34.150598  171911 cri.go:89] found id: ""
	I0903 23:45:34.150626  171911 logs.go:282] 0 containers: []
	W0903 23:45:34.150634  171911 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0903 23:45:34.150644  171911 logs.go:123] Gathering logs for kubelet ...
	I0903 23:45:34.150655  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0903 23:45:34.199612  171911 logs.go:123] Gathering logs for dmesg ...
	I0903 23:45:34.199652  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0903 23:45:34.213484  171911 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:45:34.213513  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:45:34.276337  171911 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0903 23:45:34.276358  171911 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:45:34.276380  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0903 23:45:34.347780  171911 logs.go:123] Gathering logs for container status ...
	I0903 23:45:34.347822  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0903 23:45:36.885583  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:45:36.902360  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:45:36.902439  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:45:36.936103  171911 cri.go:89] found id: ""
	I0903 23:45:36.936133  171911 logs.go:282] 0 containers: []
	W0903 23:45:36.936142  171911 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:45:36.936148  171911 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:45:36.936212  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:45:36.969146  171911 cri.go:89] found id: ""
	I0903 23:45:36.969173  171911 logs.go:282] 0 containers: []
	W0903 23:45:36.969180  171911 logs.go:284] No container was found matching "etcd"
	I0903 23:45:36.969186  171911 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:45:36.969248  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:45:37.002284  171911 cri.go:89] found id: ""
	I0903 23:45:37.002314  171911 logs.go:282] 0 containers: []
	W0903 23:45:37.002324  171911 logs.go:284] No container was found matching "coredns"
	I0903 23:45:37.002331  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:45:37.002385  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:45:37.034701  171911 cri.go:89] found id: ""
	I0903 23:45:37.034731  171911 logs.go:282] 0 containers: []
	W0903 23:45:37.034741  171911 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:45:37.034749  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:45:37.034815  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:45:37.067766  171911 cri.go:89] found id: ""
	I0903 23:45:37.067798  171911 logs.go:282] 0 containers: []
	W0903 23:45:37.067810  171911 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:45:37.067819  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:45:37.067887  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:45:37.100402  171911 cri.go:89] found id: ""
	I0903 23:45:37.100431  171911 logs.go:282] 0 containers: []
	W0903 23:45:37.100439  171911 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:45:37.100445  171911 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:45:37.100495  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:45:37.134783  171911 cri.go:89] found id: ""
	I0903 23:45:37.134814  171911 logs.go:282] 0 containers: []
	W0903 23:45:37.134822  171911 logs.go:284] No container was found matching "kindnet"
	I0903 23:45:37.134828  171911 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0903 23:45:37.134892  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0903 23:45:37.168715  171911 cri.go:89] found id: ""
	I0903 23:45:37.168746  171911 logs.go:282] 0 containers: []
	W0903 23:45:37.168753  171911 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0903 23:45:37.168768  171911 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:45:37.168781  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0903 23:45:37.239216  171911 logs.go:123] Gathering logs for container status ...
	I0903 23:45:37.239259  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0903 23:45:37.278941  171911 logs.go:123] Gathering logs for kubelet ...
	I0903 23:45:37.278977  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0903 23:45:37.327168  171911 logs.go:123] Gathering logs for dmesg ...
	I0903 23:45:37.327207  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0903 23:45:37.340806  171911 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:45:37.340837  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:45:37.402460  171911 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
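
The retry cadence is visible in the timestamps: a fresh pgrep probe fires roughly every three seconds (23:45:21, :24, :27, … :37 by this point), with no change in between. A hand-rolled equivalent of that wait loop — the interval and pattern are read off the log; the five-minute cap is illustrative, not the tool's actual timeout:

	# poll for a running apiserver process; give up after ~5 minutes (illustrative)
	for _ in $(seq 1 100); do
	  sudo pgrep -xnf 'kube-apiserver.*minikube.*' && break
	  sleep 3
	done
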
	I0903 23:45:39.902717  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:45:39.919140  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:45:39.919211  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:45:39.952379  171911 cri.go:89] found id: ""
	I0903 23:45:39.952407  171911 logs.go:282] 0 containers: []
	W0903 23:45:39.952421  171911 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:45:39.952428  171911 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:45:39.952510  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:45:39.986646  171911 cri.go:89] found id: ""
	I0903 23:45:39.986674  171911 logs.go:282] 0 containers: []
	W0903 23:45:39.986682  171911 logs.go:284] No container was found matching "etcd"
	I0903 23:45:39.986688  171911 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:45:39.986750  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:45:40.019946  171911 cri.go:89] found id: ""
	I0903 23:45:40.019984  171911 logs.go:282] 0 containers: []
	W0903 23:45:40.019995  171911 logs.go:284] No container was found matching "coredns"
	I0903 23:45:40.020004  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:45:40.020075  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:45:40.051084  171911 cri.go:89] found id: ""
	I0903 23:45:40.051120  171911 logs.go:282] 0 containers: []
	W0903 23:45:40.051131  171911 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:45:40.051139  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:45:40.051198  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:45:40.084431  171911 cri.go:89] found id: ""
	I0903 23:45:40.084471  171911 logs.go:282] 0 containers: []
	W0903 23:45:40.084485  171911 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:45:40.084493  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:45:40.084590  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:45:40.117261  171911 cri.go:89] found id: ""
	I0903 23:45:40.117289  171911 logs.go:282] 0 containers: []
	W0903 23:45:40.117298  171911 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:45:40.117305  171911 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:45:40.117356  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:45:40.149940  171911 cri.go:89] found id: ""
	I0903 23:45:40.149976  171911 logs.go:282] 0 containers: []
	W0903 23:45:40.149983  171911 logs.go:284] No container was found matching "kindnet"
	I0903 23:45:40.149989  171911 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0903 23:45:40.150049  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0903 23:45:40.185787  171911 cri.go:89] found id: ""
	I0903 23:45:40.185819  171911 logs.go:282] 0 containers: []
	W0903 23:45:40.185828  171911 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0903 23:45:40.185838  171911 logs.go:123] Gathering logs for kubelet ...
	I0903 23:45:40.185849  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0903 23:45:40.236114  171911 logs.go:123] Gathering logs for dmesg ...
	I0903 23:45:40.236151  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0903 23:45:40.249810  171911 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:45:40.249842  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:45:40.315354  171911 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0903 23:45:40.315385  171911 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:45:40.315402  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0903 23:45:40.391973  171911 logs.go:123] Gathering logs for container status ...
	I0903 23:45:40.392014  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0903 23:45:42.929523  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:45:42.946789  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:45:42.946852  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:45:42.981168  171911 cri.go:89] found id: ""
	I0903 23:45:42.981202  171911 logs.go:282] 0 containers: []
	W0903 23:45:42.981214  171911 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:45:42.981223  171911 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:45:42.981290  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:45:43.016160  171911 cri.go:89] found id: ""
	I0903 23:45:43.016191  171911 logs.go:282] 0 containers: []
	W0903 23:45:43.016202  171911 logs.go:284] No container was found matching "etcd"
	I0903 23:45:43.016210  171911 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:45:43.016277  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:45:43.052374  171911 cri.go:89] found id: ""
	I0903 23:45:43.052407  171911 logs.go:282] 0 containers: []
	W0903 23:45:43.052415  171911 logs.go:284] No container was found matching "coredns"
	I0903 23:45:43.052421  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:45:43.052490  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:45:43.087466  171911 cri.go:89] found id: ""
	I0903 23:45:43.087492  171911 logs.go:282] 0 containers: []
	W0903 23:45:43.087499  171911 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:45:43.087506  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:45:43.087578  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:45:43.121733  171911 cri.go:89] found id: ""
	I0903 23:45:43.121770  171911 logs.go:282] 0 containers: []
	W0903 23:45:43.121780  171911 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:45:43.121786  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:45:43.121852  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:45:43.155089  171911 cri.go:89] found id: ""
	I0903 23:45:43.155120  171911 logs.go:282] 0 containers: []
	W0903 23:45:43.155129  171911 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:45:43.155136  171911 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:45:43.155208  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:45:43.187081  171911 cri.go:89] found id: ""
	I0903 23:45:43.187113  171911 logs.go:282] 0 containers: []
	W0903 23:45:43.187124  171911 logs.go:284] No container was found matching "kindnet"
	I0903 23:45:43.187132  171911 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0903 23:45:43.187206  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0903 23:45:43.221988  171911 cri.go:89] found id: ""
	I0903 23:45:43.222020  171911 logs.go:282] 0 containers: []
	W0903 23:45:43.222027  171911 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0903 23:45:43.222037  171911 logs.go:123] Gathering logs for kubelet ...
	I0903 23:45:43.222048  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0903 23:45:43.274015  171911 logs.go:123] Gathering logs for dmesg ...
	I0903 23:45:43.274053  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0903 23:45:43.288204  171911 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:45:43.288237  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:45:43.352172  171911 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0903 23:45:43.352197  171911 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:45:43.352214  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0903 23:45:43.429363  171911 logs.go:123] Gathering logs for container status ...
	I0903 23:45:43.429416  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0903 23:45:45.967138  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:45:45.984430  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:45:45.984508  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:45:46.018620  171911 cri.go:89] found id: ""
	I0903 23:45:46.018656  171911 logs.go:282] 0 containers: []
	W0903 23:45:46.018670  171911 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:45:46.018680  171911 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:45:46.018736  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:45:46.052857  171911 cri.go:89] found id: ""
	I0903 23:45:46.052896  171911 logs.go:282] 0 containers: []
	W0903 23:45:46.052908  171911 logs.go:284] No container was found matching "etcd"
	I0903 23:45:46.052917  171911 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:45:46.052992  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:45:46.086760  171911 cri.go:89] found id: ""
	I0903 23:45:46.086802  171911 logs.go:282] 0 containers: []
	W0903 23:45:46.086815  171911 logs.go:284] No container was found matching "coredns"
	I0903 23:45:46.086824  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:45:46.086897  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:45:46.122770  171911 cri.go:89] found id: ""
	I0903 23:45:46.122808  171911 logs.go:282] 0 containers: []
	W0903 23:45:46.122821  171911 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:45:46.122831  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:45:46.122898  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:45:46.156632  171911 cri.go:89] found id: ""
	I0903 23:45:46.156666  171911 logs.go:282] 0 containers: []
	W0903 23:45:46.156677  171911 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:45:46.156684  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:45:46.156748  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:45:46.189167  171911 cri.go:89] found id: ""
	I0903 23:45:46.189196  171911 logs.go:282] 0 containers: []
	W0903 23:45:46.189204  171911 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:45:46.189211  171911 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:45:46.189281  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:45:46.221676  171911 cri.go:89] found id: ""
	I0903 23:45:46.221703  171911 logs.go:282] 0 containers: []
	W0903 23:45:46.221710  171911 logs.go:284] No container was found matching "kindnet"
	I0903 23:45:46.221716  171911 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0903 23:45:46.221781  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0903 23:45:46.255950  171911 cri.go:89] found id: ""
	I0903 23:45:46.255989  171911 logs.go:282] 0 containers: []
	W0903 23:45:46.256001  171911 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0903 23:45:46.256012  171911 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:45:46.256026  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:45:46.320856  171911 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0903 23:45:46.320887  171911 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:45:46.320904  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0903 23:45:46.395448  171911 logs.go:123] Gathering logs for container status ...
	I0903 23:45:46.395495  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0903 23:45:46.433348  171911 logs.go:123] Gathering logs for kubelet ...
	I0903 23:45:46.433402  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0903 23:45:46.483558  171911 logs.go:123] Gathering logs for dmesg ...
	I0903 23:45:46.483600  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0903 23:45:48.997604  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:45:49.014515  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:45:49.014584  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:45:49.049009  171911 cri.go:89] found id: ""
	I0903 23:45:49.049041  171911 logs.go:282] 0 containers: []
	W0903 23:45:49.049049  171911 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:45:49.049055  171911 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:45:49.049107  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:45:49.082752  171911 cri.go:89] found id: ""
	I0903 23:45:49.082784  171911 logs.go:282] 0 containers: []
	W0903 23:45:49.082792  171911 logs.go:284] No container was found matching "etcd"
	I0903 23:45:49.082799  171911 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:45:49.082853  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:45:49.117820  171911 cri.go:89] found id: ""
	I0903 23:45:49.117851  171911 logs.go:282] 0 containers: []
	W0903 23:45:49.117861  171911 logs.go:284] No container was found matching "coredns"
	I0903 23:45:49.117869  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:45:49.117937  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:45:49.152630  171911 cri.go:89] found id: ""
	I0903 23:45:49.152662  171911 logs.go:282] 0 containers: []
	W0903 23:45:49.152673  171911 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:45:49.152681  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:45:49.152746  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:45:49.186660  171911 cri.go:89] found id: ""
	I0903 23:45:49.186693  171911 logs.go:282] 0 containers: []
	W0903 23:45:49.186705  171911 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:45:49.186715  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:45:49.186787  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:45:49.221850  171911 cri.go:89] found id: ""
	I0903 23:45:49.221879  171911 logs.go:282] 0 containers: []
	W0903 23:45:49.221887  171911 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:45:49.221894  171911 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:45:49.221947  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:45:49.256272  171911 cri.go:89] found id: ""
	I0903 23:45:49.256301  171911 logs.go:282] 0 containers: []
	W0903 23:45:49.256309  171911 logs.go:284] No container was found matching "kindnet"
	I0903 23:45:49.256315  171911 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0903 23:45:49.256378  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0903 23:45:49.292385  171911 cri.go:89] found id: ""
	I0903 23:45:49.292414  171911 logs.go:282] 0 containers: []
	W0903 23:45:49.292422  171911 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0903 23:45:49.292432  171911 logs.go:123] Gathering logs for kubelet ...
	I0903 23:45:49.292446  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0903 23:45:49.343070  171911 logs.go:123] Gathering logs for dmesg ...
	I0903 23:45:49.343109  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0903 23:45:49.356910  171911 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:45:49.356940  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:45:49.423437  171911 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0903 23:45:49.423471  171911 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:45:49.423486  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0903 23:45:49.494062  171911 logs.go:123] Gathering logs for container status ...
	I0903 23:45:49.494108  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0903 23:45:52.034573  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:45:52.051154  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:45:52.051217  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:45:52.088178  171911 cri.go:89] found id: ""
	I0903 23:45:52.088205  171911 logs.go:282] 0 containers: []
	W0903 23:45:52.088214  171911 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:45:52.088222  171911 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:45:52.088284  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:45:52.122560  171911 cri.go:89] found id: ""
	I0903 23:45:52.122595  171911 logs.go:282] 0 containers: []
	W0903 23:45:52.122606  171911 logs.go:284] No container was found matching "etcd"
	I0903 23:45:52.122617  171911 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:45:52.122687  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:45:52.154593  171911 cri.go:89] found id: ""
	I0903 23:45:52.154628  171911 logs.go:282] 0 containers: []
	W0903 23:45:52.154636  171911 logs.go:284] No container was found matching "coredns"
	I0903 23:45:52.154646  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:45:52.154700  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:45:52.188028  171911 cri.go:89] found id: ""
	I0903 23:45:52.188066  171911 logs.go:282] 0 containers: []
	W0903 23:45:52.188079  171911 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:45:52.188088  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:45:52.188162  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:45:52.223140  171911 cri.go:89] found id: ""
	I0903 23:45:52.223165  171911 logs.go:282] 0 containers: []
	W0903 23:45:52.223172  171911 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:45:52.223178  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:45:52.223231  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:45:52.267817  171911 cri.go:89] found id: ""
	I0903 23:45:52.267851  171911 logs.go:282] 0 containers: []
	W0903 23:45:52.267862  171911 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:45:52.267869  171911 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:45:52.267936  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:45:52.302187  171911 cri.go:89] found id: ""
	I0903 23:45:52.302224  171911 logs.go:282] 0 containers: []
	W0903 23:45:52.302236  171911 logs.go:284] No container was found matching "kindnet"
	I0903 23:45:52.302245  171911 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0903 23:45:52.302315  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0903 23:45:52.336716  171911 cri.go:89] found id: ""
	I0903 23:45:52.336742  171911 logs.go:282] 0 containers: []
	W0903 23:45:52.336750  171911 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0903 23:45:52.336761  171911 logs.go:123] Gathering logs for container status ...
	I0903 23:45:52.336776  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0903 23:45:52.376759  171911 logs.go:123] Gathering logs for kubelet ...
	I0903 23:45:52.376793  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0903 23:45:52.424230  171911 logs.go:123] Gathering logs for dmesg ...
	I0903 23:45:52.424274  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0903 23:45:52.438819  171911 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:45:52.438850  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:45:52.505537  171911 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0903 23:45:52.505562  171911 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:45:52.505577  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0903 23:45:55.082568  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:45:55.100018  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:45:55.100095  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:45:55.135160  171911 cri.go:89] found id: ""
	I0903 23:45:55.135189  171911 logs.go:282] 0 containers: []
	W0903 23:45:55.135201  171911 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:45:55.135210  171911 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:45:55.135268  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:45:55.175763  171911 cri.go:89] found id: ""
	I0903 23:45:55.175800  171911 logs.go:282] 0 containers: []
	W0903 23:45:55.175808  171911 logs.go:284] No container was found matching "etcd"
	I0903 23:45:55.175814  171911 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:45:55.175875  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:45:55.209987  171911 cri.go:89] found id: ""
	I0903 23:45:55.210015  171911 logs.go:282] 0 containers: []
	W0903 23:45:55.210024  171911 logs.go:284] No container was found matching "coredns"
	I0903 23:45:55.210030  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:45:55.210090  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:45:55.244587  171911 cri.go:89] found id: ""
	I0903 23:45:55.244615  171911 logs.go:282] 0 containers: []
	W0903 23:45:55.244623  171911 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:45:55.244630  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:45:55.244699  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:45:55.279333  171911 cri.go:89] found id: ""
	I0903 23:45:55.279363  171911 logs.go:282] 0 containers: []
	W0903 23:45:55.279373  171911 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:45:55.279381  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:45:55.279451  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:45:55.313220  171911 cri.go:89] found id: ""
	I0903 23:45:55.313263  171911 logs.go:282] 0 containers: []
	W0903 23:45:55.313273  171911 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:45:55.313281  171911 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:45:55.313355  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:45:55.348181  171911 cri.go:89] found id: ""
	I0903 23:45:55.348215  171911 logs.go:282] 0 containers: []
	W0903 23:45:55.348224  171911 logs.go:284] No container was found matching "kindnet"
	I0903 23:45:55.348230  171911 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0903 23:45:55.348299  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0903 23:45:55.381456  171911 cri.go:89] found id: ""
	I0903 23:45:55.381482  171911 logs.go:282] 0 containers: []
	W0903 23:45:55.381490  171911 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0903 23:45:55.381500  171911 logs.go:123] Gathering logs for kubelet ...
	I0903 23:45:55.381516  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0903 23:45:55.433817  171911 logs.go:123] Gathering logs for dmesg ...
	I0903 23:45:55.433856  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0903 23:45:55.447772  171911 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:45:55.447804  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:45:55.513762  171911 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0903 23:45:55.513795  171911 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:45:55.513812  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0903 23:45:55.585576  171911 logs.go:123] Gathering logs for container status ...
	I0903 23:45:55.585615  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0903 23:45:58.125483  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:45:58.142430  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:45:58.142505  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:45:58.177668  171911 cri.go:89] found id: ""
	I0903 23:45:58.177697  171911 logs.go:282] 0 containers: []
	W0903 23:45:58.177709  171911 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:45:58.177717  171911 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:45:58.177791  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:45:58.212662  171911 cri.go:89] found id: ""
	I0903 23:45:58.212688  171911 logs.go:282] 0 containers: []
	W0903 23:45:58.212697  171911 logs.go:284] No container was found matching "etcd"
	I0903 23:45:58.212705  171911 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:45:58.212766  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:45:58.248588  171911 cri.go:89] found id: ""
	I0903 23:45:58.248616  171911 logs.go:282] 0 containers: []
	W0903 23:45:58.248623  171911 logs.go:284] No container was found matching "coredns"
	I0903 23:45:58.248629  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:45:58.248684  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:45:58.283427  171911 cri.go:89] found id: ""
	I0903 23:45:58.283459  171911 logs.go:282] 0 containers: []
	W0903 23:45:58.283468  171911 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:45:58.283475  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:45:58.283537  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:45:58.319164  171911 cri.go:89] found id: ""
	I0903 23:45:58.319195  171911 logs.go:282] 0 containers: []
	W0903 23:45:58.319203  171911 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:45:58.319209  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:45:58.319265  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:45:58.354722  171911 cri.go:89] found id: ""
	I0903 23:45:58.354750  171911 logs.go:282] 0 containers: []
	W0903 23:45:58.354758  171911 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:45:58.354764  171911 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:45:58.354816  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:45:58.389144  171911 cri.go:89] found id: ""
	I0903 23:45:58.389171  171911 logs.go:282] 0 containers: []
	W0903 23:45:58.389181  171911 logs.go:284] No container was found matching "kindnet"
	I0903 23:45:58.389187  171911 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0903 23:45:58.389240  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0903 23:45:58.423096  171911 cri.go:89] found id: ""
	I0903 23:45:58.423125  171911 logs.go:282] 0 containers: []
	W0903 23:45:58.423134  171911 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0903 23:45:58.423144  171911 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:45:58.423158  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0903 23:45:58.500171  171911 logs.go:123] Gathering logs for container status ...
	I0903 23:45:58.500208  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0903 23:45:58.538635  171911 logs.go:123] Gathering logs for kubelet ...
	I0903 23:45:58.538663  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0903 23:45:58.584846  171911 logs.go:123] Gathering logs for dmesg ...
	I0903 23:45:58.584882  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0903 23:45:58.598653  171911 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:45:58.598685  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:45:58.666401  171911 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0903 23:46:01.168834  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:46:01.185866  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:46:01.185953  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:46:01.219970  171911 cri.go:89] found id: ""
	I0903 23:46:01.219998  171911 logs.go:282] 0 containers: []
	W0903 23:46:01.220006  171911 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:46:01.220012  171911 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:46:01.220075  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:46:01.253640  171911 cri.go:89] found id: ""
	I0903 23:46:01.253673  171911 logs.go:282] 0 containers: []
	W0903 23:46:01.253683  171911 logs.go:284] No container was found matching "etcd"
	I0903 23:46:01.253691  171911 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:46:01.253756  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:46:01.288533  171911 cri.go:89] found id: ""
	I0903 23:46:01.288564  171911 logs.go:282] 0 containers: []
	W0903 23:46:01.288576  171911 logs.go:284] No container was found matching "coredns"
	I0903 23:46:01.288584  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:46:01.288655  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:46:01.323184  171911 cri.go:89] found id: ""
	I0903 23:46:01.323217  171911 logs.go:282] 0 containers: []
	W0903 23:46:01.323226  171911 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:46:01.323232  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:46:01.323289  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:46:01.356988  171911 cri.go:89] found id: ""
	I0903 23:46:01.357023  171911 logs.go:282] 0 containers: []
	W0903 23:46:01.357034  171911 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:46:01.357045  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:46:01.357106  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:46:01.390140  171911 cri.go:89] found id: ""
	I0903 23:46:01.390168  171911 logs.go:282] 0 containers: []
	W0903 23:46:01.390176  171911 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:46:01.390182  171911 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:46:01.390247  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:46:01.423178  171911 cri.go:89] found id: ""
	I0903 23:46:01.423207  171911 logs.go:282] 0 containers: []
	W0903 23:46:01.423215  171911 logs.go:284] No container was found matching "kindnet"
	I0903 23:46:01.423222  171911 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0903 23:46:01.423285  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0903 23:46:01.461100  171911 cri.go:89] found id: ""
	I0903 23:46:01.461138  171911 logs.go:282] 0 containers: []
	W0903 23:46:01.461148  171911 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0903 23:46:01.461160  171911 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:46:01.461185  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0903 23:46:01.535231  171911 logs.go:123] Gathering logs for container status ...
	I0903 23:46:01.535274  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0903 23:46:01.574120  171911 logs.go:123] Gathering logs for kubelet ...
	I0903 23:46:01.574154  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0903 23:46:01.621782  171911 logs.go:123] Gathering logs for dmesg ...
	I0903 23:46:01.621817  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0903 23:46:01.642205  171911 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:46:01.642246  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:46:01.707505  171911 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0903 23:46:04.207758  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:46:04.225090  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:46:04.225162  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:46:04.259542  171911 cri.go:89] found id: ""
	I0903 23:46:04.259573  171911 logs.go:282] 0 containers: []
	W0903 23:46:04.259580  171911 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:46:04.259586  171911 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:46:04.259638  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:46:04.294395  171911 cri.go:89] found id: ""
	I0903 23:46:04.294422  171911 logs.go:282] 0 containers: []
	W0903 23:46:04.294430  171911 logs.go:284] No container was found matching "etcd"
	I0903 23:46:04.294436  171911 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:46:04.294488  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:46:04.329086  171911 cri.go:89] found id: ""
	I0903 23:46:04.329125  171911 logs.go:282] 0 containers: []
	W0903 23:46:04.329134  171911 logs.go:284] No container was found matching "coredns"
	I0903 23:46:04.329140  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:46:04.329194  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:46:04.362247  171911 cri.go:89] found id: ""
	I0903 23:46:04.362278  171911 logs.go:282] 0 containers: []
	W0903 23:46:04.362286  171911 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:46:04.362292  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:46:04.362348  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:46:04.397700  171911 cri.go:89] found id: ""
	I0903 23:46:04.397731  171911 logs.go:282] 0 containers: []
	W0903 23:46:04.397739  171911 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:46:04.397745  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:46:04.397800  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:46:04.431332  171911 cri.go:89] found id: ""
	I0903 23:46:04.431360  171911 logs.go:282] 0 containers: []
	W0903 23:46:04.431368  171911 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:46:04.431374  171911 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:46:04.431425  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:46:04.465005  171911 cri.go:89] found id: ""
	I0903 23:46:04.465035  171911 logs.go:282] 0 containers: []
	W0903 23:46:04.465042  171911 logs.go:284] No container was found matching "kindnet"
	I0903 23:46:04.465049  171911 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0903 23:46:04.465108  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0903 23:46:04.500441  171911 cri.go:89] found id: ""
	I0903 23:46:04.500470  171911 logs.go:282] 0 containers: []
	W0903 23:46:04.500478  171911 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0903 23:46:04.500487  171911 logs.go:123] Gathering logs for container status ...
	I0903 23:46:04.500505  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0903 23:46:04.538356  171911 logs.go:123] Gathering logs for kubelet ...
	I0903 23:46:04.538389  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0903 23:46:04.585363  171911 logs.go:123] Gathering logs for dmesg ...
	I0903 23:46:04.585412  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0903 23:46:04.602519  171911 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:46:04.602553  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:46:04.676451  171911 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0903 23:46:04.676474  171911 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:46:04.676488  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0903 23:46:07.260862  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:46:07.278149  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:46:07.278214  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:46:07.320356  171911 cri.go:89] found id: ""
	I0903 23:46:07.320393  171911 logs.go:282] 0 containers: []
	W0903 23:46:07.320405  171911 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:46:07.320412  171911 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:46:07.320498  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:46:07.355032  171911 cri.go:89] found id: ""
	I0903 23:46:07.355063  171911 logs.go:282] 0 containers: []
	W0903 23:46:07.355074  171911 logs.go:284] No container was found matching "etcd"
	I0903 23:46:07.355090  171911 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:46:07.355155  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:46:07.391094  171911 cri.go:89] found id: ""
	I0903 23:46:07.391119  171911 logs.go:282] 0 containers: []
	W0903 23:46:07.391129  171911 logs.go:284] No container was found matching "coredns"
	I0903 23:46:07.391136  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:46:07.391195  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:46:07.431946  171911 cri.go:89] found id: ""
	I0903 23:46:07.431979  171911 logs.go:282] 0 containers: []
	W0903 23:46:07.431988  171911 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:46:07.431994  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:46:07.432049  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:46:07.470935  171911 cri.go:89] found id: ""
	I0903 23:46:07.470965  171911 logs.go:282] 0 containers: []
	W0903 23:46:07.470974  171911 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:46:07.470981  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:46:07.471035  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:46:07.507140  171911 cri.go:89] found id: ""
	I0903 23:46:07.507171  171911 logs.go:282] 0 containers: []
	W0903 23:46:07.507179  171911 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:46:07.507185  171911 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:46:07.507243  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:46:07.542978  171911 cri.go:89] found id: ""
	I0903 23:46:07.543007  171911 logs.go:282] 0 containers: []
	W0903 23:46:07.543014  171911 logs.go:284] No container was found matching "kindnet"
	I0903 23:46:07.543022  171911 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0903 23:46:07.543083  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0903 23:46:07.578836  171911 cri.go:89] found id: ""
	I0903 23:46:07.578867  171911 logs.go:282] 0 containers: []
	W0903 23:46:07.578875  171911 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0903 23:46:07.578885  171911 logs.go:123] Gathering logs for kubelet ...
	I0903 23:46:07.578911  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0903 23:46:07.625808  171911 logs.go:123] Gathering logs for dmesg ...
	I0903 23:46:07.625852  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0903 23:46:07.639685  171911 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:46:07.639719  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:46:07.705947  171911 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0903 23:46:07.705975  171911 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:46:07.705994  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0903 23:46:07.782360  171911 logs.go:123] Gathering logs for container status ...
	I0903 23:46:07.782406  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0903 23:46:10.331295  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:46:10.348405  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:46:10.348479  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:46:10.381149  171911 cri.go:89] found id: ""
	I0903 23:46:10.381178  171911 logs.go:282] 0 containers: []
	W0903 23:46:10.381185  171911 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:46:10.381192  171911 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:46:10.381254  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:46:10.414056  171911 cri.go:89] found id: ""
	I0903 23:46:10.414096  171911 logs.go:282] 0 containers: []
	W0903 23:46:10.414108  171911 logs.go:284] No container was found matching "etcd"
	I0903 23:46:10.414117  171911 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:46:10.414174  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:46:10.449437  171911 cri.go:89] found id: ""
	I0903 23:46:10.449467  171911 logs.go:282] 0 containers: []
	W0903 23:46:10.449478  171911 logs.go:284] No container was found matching "coredns"
	I0903 23:46:10.449485  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:46:10.449568  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:46:10.485019  171911 cri.go:89] found id: ""
	I0903 23:46:10.485047  171911 logs.go:282] 0 containers: []
	W0903 23:46:10.485058  171911 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:46:10.485064  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:46:10.485115  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:46:10.517909  171911 cri.go:89] found id: ""
	I0903 23:46:10.517943  171911 logs.go:282] 0 containers: []
	W0903 23:46:10.517955  171911 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:46:10.517963  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:46:10.518037  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:46:10.551948  171911 cri.go:89] found id: ""
	I0903 23:46:10.551976  171911 logs.go:282] 0 containers: []
	W0903 23:46:10.551984  171911 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:46:10.551990  171911 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:46:10.552053  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:46:10.586008  171911 cri.go:89] found id: ""
	I0903 23:46:10.586042  171911 logs.go:282] 0 containers: []
	W0903 23:46:10.586052  171911 logs.go:284] No container was found matching "kindnet"
	I0903 23:46:10.586060  171911 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0903 23:46:10.586130  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0903 23:46:10.621028  171911 cri.go:89] found id: ""
	I0903 23:46:10.621054  171911 logs.go:282] 0 containers: []
	W0903 23:46:10.621062  171911 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0903 23:46:10.621073  171911 logs.go:123] Gathering logs for kubelet ...
	I0903 23:46:10.621122  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0903 23:46:10.670328  171911 logs.go:123] Gathering logs for dmesg ...
	I0903 23:46:10.670367  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0903 23:46:10.684168  171911 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:46:10.684196  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:46:10.750643  171911 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0903 23:46:10.750664  171911 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:46:10.750678  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0903 23:46:10.824493  171911 logs.go:123] Gathering logs for container status ...
	I0903 23:46:10.824545  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0903 23:46:13.375299  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:46:13.392043  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:46:13.392129  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:46:13.427112  171911 cri.go:89] found id: ""
	I0903 23:46:13.427149  171911 logs.go:282] 0 containers: []
	W0903 23:46:13.427159  171911 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:46:13.427167  171911 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:46:13.427240  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:46:13.462866  171911 cri.go:89] found id: ""
	I0903 23:46:13.462900  171911 logs.go:282] 0 containers: []
	W0903 23:46:13.462908  171911 logs.go:284] No container was found matching "etcd"
	I0903 23:46:13.462915  171911 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:46:13.462976  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:46:13.498341  171911 cri.go:89] found id: ""
	I0903 23:46:13.498372  171911 logs.go:282] 0 containers: []
	W0903 23:46:13.498381  171911 logs.go:284] No container was found matching "coredns"
	I0903 23:46:13.498387  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:46:13.498440  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:46:13.543600  171911 cri.go:89] found id: ""
	I0903 23:46:13.543627  171911 logs.go:282] 0 containers: []
	W0903 23:46:13.543636  171911 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:46:13.543642  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:46:13.543696  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:46:13.578615  171911 cri.go:89] found id: ""
	I0903 23:46:13.578643  171911 logs.go:282] 0 containers: []
	W0903 23:46:13.578651  171911 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:46:13.578657  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:46:13.578720  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:46:13.613164  171911 cri.go:89] found id: ""
	I0903 23:46:13.613190  171911 logs.go:282] 0 containers: []
	W0903 23:46:13.613197  171911 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:46:13.613204  171911 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:46:13.613268  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:46:13.648193  171911 cri.go:89] found id: ""
	I0903 23:46:13.648219  171911 logs.go:282] 0 containers: []
	W0903 23:46:13.648227  171911 logs.go:284] No container was found matching "kindnet"
	I0903 23:46:13.648235  171911 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0903 23:46:13.648289  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0903 23:46:13.692585  171911 cri.go:89] found id: ""
	I0903 23:46:13.692611  171911 logs.go:282] 0 containers: []
	W0903 23:46:13.692619  171911 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0903 23:46:13.692630  171911 logs.go:123] Gathering logs for dmesg ...
	I0903 23:46:13.692649  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0903 23:46:13.709447  171911 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:46:13.709475  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:46:13.787419  171911 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0903 23:46:13.787450  171911 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:46:13.787466  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0903 23:46:13.876087  171911 logs.go:123] Gathering logs for container status ...
	I0903 23:46:13.876121  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0903 23:46:13.922854  171911 logs.go:123] Gathering logs for kubelet ...
	I0903 23:46:13.922882  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0903 23:46:16.471424  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:46:16.489172  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:46:16.489260  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:46:16.523832  171911 cri.go:89] found id: ""
	I0903 23:46:16.523860  171911 logs.go:282] 0 containers: []
	W0903 23:46:16.523867  171911 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:46:16.523884  171911 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:46:16.523938  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:46:16.561012  171911 cri.go:89] found id: ""
	I0903 23:46:16.561043  171911 logs.go:282] 0 containers: []
	W0903 23:46:16.561051  171911 logs.go:284] No container was found matching "etcd"
	I0903 23:46:16.561057  171911 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:46:16.561112  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:46:16.595123  171911 cri.go:89] found id: ""
	I0903 23:46:16.595149  171911 logs.go:282] 0 containers: []
	W0903 23:46:16.595156  171911 logs.go:284] No container was found matching "coredns"
	I0903 23:46:16.595161  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:46:16.595214  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:46:16.629844  171911 cri.go:89] found id: ""
	I0903 23:46:16.629879  171911 logs.go:282] 0 containers: []
	W0903 23:46:16.629887  171911 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:46:16.629893  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:46:16.629946  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:46:16.665052  171911 cri.go:89] found id: ""
	I0903 23:46:16.665081  171911 logs.go:282] 0 containers: []
	W0903 23:46:16.665089  171911 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:46:16.665103  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:46:16.665176  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:46:16.699559  171911 cri.go:89] found id: ""
	I0903 23:46:16.699591  171911 logs.go:282] 0 containers: []
	W0903 23:46:16.699599  171911 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:46:16.699607  171911 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:46:16.699670  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:46:16.734191  171911 cri.go:89] found id: ""
	I0903 23:46:16.734221  171911 logs.go:282] 0 containers: []
	W0903 23:46:16.734229  171911 logs.go:284] No container was found matching "kindnet"
	I0903 23:46:16.734235  171911 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0903 23:46:16.734328  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0903 23:46:16.770088  171911 cri.go:89] found id: ""
	I0903 23:46:16.770117  171911 logs.go:282] 0 containers: []
	W0903 23:46:16.770125  171911 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0903 23:46:16.770135  171911 logs.go:123] Gathering logs for kubelet ...
	I0903 23:46:16.770150  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0903 23:46:16.818779  171911 logs.go:123] Gathering logs for dmesg ...
	I0903 23:46:16.818821  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0903 23:46:16.833000  171911 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:46:16.833028  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:46:16.896259  171911 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0903 23:46:16.896283  171911 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:46:16.896301  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0903 23:46:16.973287  171911 logs.go:123] Gathering logs for container status ...
	I0903 23:46:16.973330  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0903 23:46:19.513618  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:46:19.533892  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:46:19.533986  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:46:19.575679  171911 cri.go:89] found id: ""
	I0903 23:46:19.575712  171911 logs.go:282] 0 containers: []
	W0903 23:46:19.575722  171911 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:46:19.575731  171911 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:46:19.575803  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:46:19.623477  171911 cri.go:89] found id: ""
	I0903 23:46:19.623509  171911 logs.go:282] 0 containers: []
	W0903 23:46:19.623517  171911 logs.go:284] No container was found matching "etcd"
	I0903 23:46:19.623524  171911 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:46:19.623592  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:46:19.663676  171911 cri.go:89] found id: ""
	I0903 23:46:19.663709  171911 logs.go:282] 0 containers: []
	W0903 23:46:19.663718  171911 logs.go:284] No container was found matching "coredns"
	I0903 23:46:19.663725  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:46:19.663792  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:46:19.698413  171911 cri.go:89] found id: ""
	I0903 23:46:19.698457  171911 logs.go:282] 0 containers: []
	W0903 23:46:19.698466  171911 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:46:19.698473  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:46:19.698545  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:46:19.734009  171911 cri.go:89] found id: ""
	I0903 23:46:19.734043  171911 logs.go:282] 0 containers: []
	W0903 23:46:19.734051  171911 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:46:19.734057  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:46:19.734124  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:46:19.770645  171911 cri.go:89] found id: ""
	I0903 23:46:19.770674  171911 logs.go:282] 0 containers: []
	W0903 23:46:19.770682  171911 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:46:19.770688  171911 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:46:19.770749  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:46:19.805002  171911 cri.go:89] found id: ""
	I0903 23:46:19.805039  171911 logs.go:282] 0 containers: []
	W0903 23:46:19.805051  171911 logs.go:284] No container was found matching "kindnet"
	I0903 23:46:19.805062  171911 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0903 23:46:19.805134  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0903 23:46:19.839613  171911 cri.go:89] found id: ""
	I0903 23:46:19.839649  171911 logs.go:282] 0 containers: []
	W0903 23:46:19.839659  171911 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0903 23:46:19.839672  171911 logs.go:123] Gathering logs for kubelet ...
	I0903 23:46:19.839687  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0903 23:46:19.892825  171911 logs.go:123] Gathering logs for dmesg ...
	I0903 23:46:19.892868  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0903 23:46:19.907172  171911 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:46:19.907215  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:46:19.972520  171911 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0903 23:46:19.972549  171911 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:46:19.972563  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0903 23:46:20.047246  171911 logs.go:123] Gathering logs for container status ...
	I0903 23:46:20.047313  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0903 23:46:22.586936  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:46:22.603850  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:46:22.603927  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:46:22.638907  171911 cri.go:89] found id: ""
	I0903 23:46:22.638936  171911 logs.go:282] 0 containers: []
	W0903 23:46:22.638945  171911 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:46:22.638954  171911 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:46:22.639025  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:46:22.674519  171911 cri.go:89] found id: ""
	I0903 23:46:22.674550  171911 logs.go:282] 0 containers: []
	W0903 23:46:22.674557  171911 logs.go:284] No container was found matching "etcd"
	I0903 23:46:22.674563  171911 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:46:22.674623  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:46:22.709223  171911 cri.go:89] found id: ""
	I0903 23:46:22.709256  171911 logs.go:282] 0 containers: []
	W0903 23:46:22.709267  171911 logs.go:284] No container was found matching "coredns"
	I0903 23:46:22.709274  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:46:22.709343  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:46:22.744699  171911 cri.go:89] found id: ""
	I0903 23:46:22.744732  171911 logs.go:282] 0 containers: []
	W0903 23:46:22.744742  171911 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:46:22.744748  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:46:22.744801  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:46:22.780192  171911 cri.go:89] found id: ""
	I0903 23:46:22.780226  171911 logs.go:282] 0 containers: []
	W0903 23:46:22.780234  171911 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:46:22.780240  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:46:22.780296  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:46:22.814575  171911 cri.go:89] found id: ""
	I0903 23:46:22.814606  171911 logs.go:282] 0 containers: []
	W0903 23:46:22.814615  171911 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:46:22.814621  171911 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:46:22.814674  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:46:22.851385  171911 cri.go:89] found id: ""
	I0903 23:46:22.851415  171911 logs.go:282] 0 containers: []
	W0903 23:46:22.851423  171911 logs.go:284] No container was found matching "kindnet"
	I0903 23:46:22.851429  171911 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0903 23:46:22.851480  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0903 23:46:22.884676  171911 cri.go:89] found id: ""
	I0903 23:46:22.884705  171911 logs.go:282] 0 containers: []
	W0903 23:46:22.884713  171911 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0903 23:46:22.884723  171911 logs.go:123] Gathering logs for kubelet ...
	I0903 23:46:22.884734  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0903 23:46:22.935185  171911 logs.go:123] Gathering logs for dmesg ...
	I0903 23:46:22.935223  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0903 23:46:22.949406  171911 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:46:22.949442  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:46:23.012847  171911 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0903 23:46:23.012877  171911 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:46:23.012895  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0903 23:46:23.084409  171911 logs.go:123] Gathering logs for container status ...
	I0903 23:46:23.084455  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0903 23:46:25.631753  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:46:25.651358  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:46:25.651431  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:46:25.685485  171911 cri.go:89] found id: ""
	I0903 23:46:25.685514  171911 logs.go:282] 0 containers: []
	W0903 23:46:25.685523  171911 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:46:25.685528  171911 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:46:25.685591  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:46:25.720765  171911 cri.go:89] found id: ""
	I0903 23:46:25.720796  171911 logs.go:282] 0 containers: []
	W0903 23:46:25.720804  171911 logs.go:284] No container was found matching "etcd"
	I0903 23:46:25.720810  171911 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:46:25.720867  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:46:25.754626  171911 cri.go:89] found id: ""
	I0903 23:46:25.754659  171911 logs.go:282] 0 containers: []
	W0903 23:46:25.754670  171911 logs.go:284] No container was found matching "coredns"
	I0903 23:46:25.754678  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:46:25.754731  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:46:25.789362  171911 cri.go:89] found id: ""
	I0903 23:46:25.789411  171911 logs.go:282] 0 containers: []
	W0903 23:46:25.789421  171911 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:46:25.789429  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:46:25.789497  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:46:25.826469  171911 cri.go:89] found id: ""
	I0903 23:46:25.826502  171911 logs.go:282] 0 containers: []
	W0903 23:46:25.826511  171911 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:46:25.826519  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:46:25.826582  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:46:25.861006  171911 cri.go:89] found id: ""
	I0903 23:46:25.861045  171911 logs.go:282] 0 containers: []
	W0903 23:46:25.861057  171911 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:46:25.861066  171911 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:46:25.861141  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:46:25.895640  171911 cri.go:89] found id: ""
	I0903 23:46:25.895676  171911 logs.go:282] 0 containers: []
	W0903 23:46:25.895687  171911 logs.go:284] No container was found matching "kindnet"
	I0903 23:46:25.895696  171911 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0903 23:46:25.895766  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0903 23:46:25.930858  171911 cri.go:89] found id: ""
	I0903 23:46:25.930886  171911 logs.go:282] 0 containers: []
	W0903 23:46:25.930894  171911 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0903 23:46:25.930903  171911 logs.go:123] Gathering logs for dmesg ...
	I0903 23:46:25.930917  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0903 23:46:25.945023  171911 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:46:25.945048  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:46:26.011367  171911 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0903 23:46:26.011401  171911 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:46:26.011419  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0903 23:46:26.088648  171911 logs.go:123] Gathering logs for container status ...
	I0903 23:46:26.088697  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0903 23:46:26.127560  171911 logs.go:123] Gathering logs for kubelet ...
	I0903 23:46:26.127595  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0903 23:46:28.679659  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:46:28.696950  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:46:28.697030  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:46:28.730995  171911 cri.go:89] found id: ""
	I0903 23:46:28.731026  171911 logs.go:282] 0 containers: []
	W0903 23:46:28.731039  171911 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:46:28.731047  171911 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:46:28.731121  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:46:28.765348  171911 cri.go:89] found id: ""
	I0903 23:46:28.765377  171911 logs.go:282] 0 containers: []
	W0903 23:46:28.765396  171911 logs.go:284] No container was found matching "etcd"
	I0903 23:46:28.765404  171911 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:46:28.765471  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:46:28.801427  171911 cri.go:89] found id: ""
	I0903 23:46:28.801459  171911 logs.go:282] 0 containers: []
	W0903 23:46:28.801470  171911 logs.go:284] No container was found matching "coredns"
	I0903 23:46:28.801478  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:46:28.801545  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:46:28.836740  171911 cri.go:89] found id: ""
	I0903 23:46:28.836766  171911 logs.go:282] 0 containers: []
	W0903 23:46:28.836775  171911 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:46:28.836781  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:46:28.836865  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:46:28.872484  171911 cri.go:89] found id: ""
	I0903 23:46:28.872517  171911 logs.go:282] 0 containers: []
	W0903 23:46:28.872528  171911 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:46:28.872538  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:46:28.872619  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:46:28.906796  171911 cri.go:89] found id: ""
	I0903 23:46:28.906840  171911 logs.go:282] 0 containers: []
	W0903 23:46:28.906854  171911 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:46:28.906864  171911 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:46:28.906936  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:46:28.941330  171911 cri.go:89] found id: ""
	I0903 23:46:28.941359  171911 logs.go:282] 0 containers: []
	W0903 23:46:28.941367  171911 logs.go:284] No container was found matching "kindnet"
	I0903 23:46:28.941373  171911 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0903 23:46:28.941447  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0903 23:46:28.975273  171911 cri.go:89] found id: ""
	I0903 23:46:28.975304  171911 logs.go:282] 0 containers: []
	W0903 23:46:28.975316  171911 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0903 23:46:28.975328  171911 logs.go:123] Gathering logs for container status ...
	I0903 23:46:28.975351  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0903 23:46:29.013344  171911 logs.go:123] Gathering logs for kubelet ...
	I0903 23:46:29.013374  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0903 23:46:29.062906  171911 logs.go:123] Gathering logs for dmesg ...
	I0903 23:46:29.062943  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0903 23:46:29.077068  171911 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:46:29.077094  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:46:29.141017  171911 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0903 23:46:29.141041  171911 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:46:29.141059  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
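	(The component probes above reduce to one crictl query per expected control-plane container. A minimal bash sketch equivalent to that sequence of ssh_runner commands, assuming crictl is on the node's PATH as the log's own fallback "which crictl || echo crictl" does, is:
	
	    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet kubernetes-dashboard; do
	      # same query the harness issues; --quiet prints only container IDs
	      if [ -z "$(sudo crictl ps -a --quiet --name="$name")" ]; then
	        echo "No container was found matching \"$name\""
	      fi
	    done
	
	Every probe in this run returned an empty ID list, which is why each component is reported as missing.)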
	I0903 23:46:31.720110  171911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:46:31.737478  171911 kubeadm.go:593] duration metric: took 4m4.418875365s to restartPrimaryControlPlane
	W0903 23:46:31.737562  171911 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0903 23:46:31.737592  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0903 23:46:36.182110  171911 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (4.444484741s)
	I0903 23:46:36.182205  171911 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0903 23:46:36.197763  171911 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0903 23:46:36.209295  171911 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0903 23:46:36.220561  171911 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0903 23:46:36.220584  171911 kubeadm.go:157] found existing configuration files:
	
	I0903 23:46:36.220630  171911 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0903 23:46:36.231194  171911 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0903 23:46:36.231261  171911 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0903 23:46:36.242263  171911 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0903 23:46:36.252204  171911 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0903 23:46:36.252278  171911 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0903 23:46:36.263654  171911 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0903 23:46:36.274160  171911 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0903 23:46:36.274216  171911 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0903 23:46:36.285535  171911 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0903 23:46:36.296495  171911 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0903 23:46:36.296566  171911 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
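	(The stale-config cleanup just logged greps each kubeconfig for the expected control-plane endpoint and removes any file that lacks it. A minimal bash sketch of that per-file sequence:
	
	    endpoint="https://control-plane.minikube.internal:8443"
	    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	      # grep exits non-zero if the endpoint (or the file) is missing; then remove it
	      sudo grep -q "$endpoint" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
	    done
	
	In this run none of the four files existed, so each grep exited with status 2 and each rm was a no-op.)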
	I0903 23:46:36.308036  171911 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0903 23:46:36.376723  171911 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0903 23:46:36.376807  171911 kubeadm.go:310] [preflight] Running pre-flight checks
	I0903 23:46:36.507237  171911 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0903 23:46:36.507356  171911 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0903 23:46:36.507451  171911 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0903 23:46:36.676775  171911 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0903 23:46:36.678771  171911 out.go:252]   - Generating certificates and keys ...
	I0903 23:46:36.678910  171911 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0903 23:46:36.679002  171911 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0903 23:46:36.679121  171911 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0903 23:46:36.679204  171911 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0903 23:46:36.679317  171911 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0903 23:46:36.679385  171911 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0903 23:46:36.679592  171911 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0903 23:46:36.680075  171911 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0903 23:46:36.680443  171911 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0903 23:46:36.680690  171911 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0903 23:46:36.680741  171911 kubeadm.go:310] [certs] Using the existing "sa" key
	I0903 23:46:36.680801  171911 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0903 23:46:37.040729  171911 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0903 23:46:37.327107  171911 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0903 23:46:37.592932  171911 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0903 23:46:37.842405  171911 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0903 23:46:37.860457  171911 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0903 23:46:37.861477  171911 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0903 23:46:37.861541  171911 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0903 23:46:38.009088  171911 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0903 23:46:38.010918  171911 out.go:252]   - Booting up control plane ...
	I0903 23:46:38.011062  171911 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0903 23:46:38.018027  171911 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0903 23:46:38.018106  171911 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0903 23:46:38.018634  171911 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0903 23:46:38.023296  171911 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0903 23:47:18.025738  171911 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0903 23:47:18.026296  171911 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0903 23:47:18.026552  171911 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0903 23:47:23.027174  171911 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0903 23:47:23.027478  171911 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0903 23:47:33.028031  171911 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0903 23:47:33.028314  171911 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0903 23:47:53.028650  171911 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0903 23:47:53.028911  171911 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0903 23:48:33.031053  171911 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0903 23:48:33.031367  171911 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0903 23:48:33.031406  171911 kubeadm.go:310] 
	I0903 23:48:33.031457  171911 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0903 23:48:33.031522  171911 kubeadm.go:310] 		timed out waiting for the condition
	I0903 23:48:33.031531  171911 kubeadm.go:310] 
	I0903 23:48:33.031571  171911 kubeadm.go:310] 	This error is likely caused by:
	I0903 23:48:33.031621  171911 kubeadm.go:310] 		- The kubelet is not running
	I0903 23:48:33.031747  171911 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0903 23:48:33.031758  171911 kubeadm.go:310] 
	I0903 23:48:33.031898  171911 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0903 23:48:33.031946  171911 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0903 23:48:33.032002  171911 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0903 23:48:33.032011  171911 kubeadm.go:310] 
	I0903 23:48:33.032171  171911 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0903 23:48:33.032298  171911 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0903 23:48:33.032308  171911 kubeadm.go:310] 
	I0903 23:48:33.032463  171911 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0903 23:48:33.032612  171911 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0903 23:48:33.032693  171911 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0903 23:48:33.032780  171911 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0903 23:48:33.032797  171911 kubeadm.go:310] 
	I0903 23:48:33.033539  171911 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0903 23:48:33.033643  171911 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0903 23:48:33.033735  171911 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
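	(kubeadm's triage advice above boils down to four commands on the node, copied here as given; CONTAINERID is kubeadm's own placeholder:
	
	    systemctl status kubelet
	    journalctl -xeu kubelet
	    crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	    crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID
	
	On this node the crictl listing is empty, per the probes earlier in the log, so the failure points at the kubelet itself rather than at a crashed control-plane container.)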
	W0903 23:48:33.033908  171911 out.go:285] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0903 23:48:33.033966  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0903 23:48:33.484811  171911 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0903 23:48:33.501986  171911 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0903 23:48:33.513610  171911 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0903 23:48:33.513635  171911 kubeadm.go:157] found existing configuration files:
	
	I0903 23:48:33.513694  171911 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0903 23:48:33.524062  171911 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0903 23:48:33.524128  171911 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0903 23:48:33.534922  171911 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0903 23:48:33.544314  171911 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0903 23:48:33.544364  171911 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0903 23:48:33.555345  171911 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0903 23:48:33.565515  171911 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0903 23:48:33.565578  171911 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0903 23:48:33.576111  171911 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0903 23:48:33.586276  171911 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0903 23:48:33.586335  171911 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0903 23:48:33.597298  171911 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0903 23:48:33.791164  171911 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0903 23:50:29.735983  171911 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0903 23:50:29.736108  171911 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0903 23:50:29.738473  171911 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0903 23:50:29.738539  171911 kubeadm.go:310] [preflight] Running pre-flight checks
	I0903 23:50:29.738632  171911 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0903 23:50:29.738777  171911 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0903 23:50:29.738908  171911 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0903 23:50:29.738994  171911 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0903 23:50:29.740823  171911 out.go:252]   - Generating certificates and keys ...
	I0903 23:50:29.740897  171911 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0903 23:50:29.740956  171911 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0903 23:50:29.741026  171911 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0903 23:50:29.741099  171911 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0903 23:50:29.741175  171911 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0903 23:50:29.741225  171911 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0903 23:50:29.741281  171911 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0903 23:50:29.741336  171911 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0903 23:50:29.741423  171911 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0903 23:50:29.741518  171911 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0903 23:50:29.741593  171911 kubeadm.go:310] [certs] Using the existing "sa" key
	I0903 23:50:29.741669  171911 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0903 23:50:29.741746  171911 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0903 23:50:29.741831  171911 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0903 23:50:29.741921  171911 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0903 23:50:29.742004  171911 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0903 23:50:29.742142  171911 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0903 23:50:29.742267  171911 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0903 23:50:29.742339  171911 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0903 23:50:29.742442  171911 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0903 23:50:29.744016  171911 out.go:252]   - Booting up control plane ...
	I0903 23:50:29.744169  171911 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0903 23:50:29.744283  171911 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0903 23:50:29.744364  171911 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0903 23:50:29.744481  171911 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0903 23:50:29.744722  171911 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0903 23:50:29.744772  171911 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0903 23:50:29.744856  171911 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0903 23:50:29.745144  171911 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0903 23:50:29.745256  171911 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0903 23:50:29.745481  171911 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0903 23:50:29.745588  171911 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0903 23:50:29.745791  171911 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0903 23:50:29.745882  171911 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0903 23:50:29.746079  171911 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0903 23:50:29.746151  171911 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0903 23:50:29.746327  171911 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0903 23:50:29.746336  171911 kubeadm.go:310] 
	I0903 23:50:29.746385  171911 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0903 23:50:29.746439  171911 kubeadm.go:310] 		timed out waiting for the condition
	I0903 23:50:29.746449  171911 kubeadm.go:310] 
	I0903 23:50:29.746505  171911 kubeadm.go:310] 	This error is likely caused by:
	I0903 23:50:29.746554  171911 kubeadm.go:310] 		- The kubelet is not running
	I0903 23:50:29.746678  171911 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0903 23:50:29.746686  171911 kubeadm.go:310] 
	I0903 23:50:29.746808  171911 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0903 23:50:29.746856  171911 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0903 23:50:29.746908  171911 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0903 23:50:29.746918  171911 kubeadm.go:310] 
	I0903 23:50:29.747078  171911 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0903 23:50:29.747201  171911 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0903 23:50:29.747208  171911 kubeadm.go:310] 
	I0903 23:50:29.747368  171911 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0903 23:50:29.747487  171911 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0903 23:50:29.747603  171911 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0903 23:50:29.747684  171911 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0903 23:50:29.747736  171911 kubeadm.go:310] 
	I0903 23:50:29.747765  171911 kubeadm.go:394] duration metric: took 8m2.477240692s to StartCluster
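	(All of the kubelet-check failures above come from a single HTTP probe against the kubelet's healthz endpoint. It can be reproduced on the node via minikube's ssh subcommand; the profile name is taken from the CRI-O log later in this report:
	
	    # a healthy kubelet answers "ok"; here the dial fails with "connection refused"
	    minikube -p old-k8s-version-335468 ssh "curl -sSL http://localhost:10248/healthz"
	
	That matches the repeated "dial tcp 127.0.0.1:10248: connect: connection refused" lines in the log.)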
	I0903 23:50:29.747828  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0903 23:50:29.747896  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0903 23:50:29.786098  171911 cri.go:89] found id: ""
	I0903 23:50:29.786144  171911 logs.go:282] 0 containers: []
	W0903 23:50:29.786162  171911 logs.go:284] No container was found matching "kube-apiserver"
	I0903 23:50:29.786169  171911 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0903 23:50:29.786251  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0903 23:50:29.819064  171911 cri.go:89] found id: ""
	I0903 23:50:29.819095  171911 logs.go:282] 0 containers: []
	W0903 23:50:29.819103  171911 logs.go:284] No container was found matching "etcd"
	I0903 23:50:29.819109  171911 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0903 23:50:29.819164  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0903 23:50:29.853192  171911 cri.go:89] found id: ""
	I0903 23:50:29.853223  171911 logs.go:282] 0 containers: []
	W0903 23:50:29.853247  171911 logs.go:284] No container was found matching "coredns"
	I0903 23:50:29.853255  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0903 23:50:29.853324  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0903 23:50:29.885949  171911 cri.go:89] found id: ""
	I0903 23:50:29.885979  171911 logs.go:282] 0 containers: []
	W0903 23:50:29.885991  171911 logs.go:284] No container was found matching "kube-scheduler"
	I0903 23:50:29.885999  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0903 23:50:29.886051  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0903 23:50:29.920423  171911 cri.go:89] found id: ""
	I0903 23:50:29.920451  171911 logs.go:282] 0 containers: []
	W0903 23:50:29.920458  171911 logs.go:284] No container was found matching "kube-proxy"
	I0903 23:50:29.920464  171911 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0903 23:50:29.920516  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0903 23:50:29.955106  171911 cri.go:89] found id: ""
	I0903 23:50:29.955142  171911 logs.go:282] 0 containers: []
	W0903 23:50:29.955153  171911 logs.go:284] No container was found matching "kube-controller-manager"
	I0903 23:50:29.955161  171911 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0903 23:50:29.955241  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0903 23:50:29.988125  171911 cri.go:89] found id: ""
	I0903 23:50:29.988151  171911 logs.go:282] 0 containers: []
	W0903 23:50:29.988159  171911 logs.go:284] No container was found matching "kindnet"
	I0903 23:50:29.988166  171911 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0903 23:50:29.988220  171911 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0903 23:50:30.022768  171911 cri.go:89] found id: ""
	I0903 23:50:30.022795  171911 logs.go:282] 0 containers: []
	W0903 23:50:30.022803  171911 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0903 23:50:30.022813  171911 logs.go:123] Gathering logs for container status ...
	I0903 23:50:30.022828  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0903 23:50:30.059016  171911 logs.go:123] Gathering logs for kubelet ...
	I0903 23:50:30.059049  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0903 23:50:30.108030  171911 logs.go:123] Gathering logs for dmesg ...
	I0903 23:50:30.108065  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0903 23:50:30.121879  171911 logs.go:123] Gathering logs for describe nodes ...
	I0903 23:50:30.121906  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0903 23:50:30.190324  171911 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
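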
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0903 23:50:30.190349  171911 logs.go:123] Gathering logs for CRI-O ...
	I0903 23:50:30.190362  171911 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W0903 23:50:30.296724  171911 out.go:434] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0903 23:50:30.296816  171911 out.go:285] * 
	W0903 23:50:30.296931  171911 out.go:285] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0903 23:50:30.296951  171911 out.go:285] * 
	W0903 23:50:30.299691  171911 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0903 23:50:30.303743  171911 out.go:203] 
	W0903 23:50:30.304964  171911 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0903 23:50:30.305026  171911 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0903 23:50:30.305059  171911 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
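	(The suggestion above amounts to retrying the start with the kubelet cgroup driver pinned to systemd. The --extra-config flag is quoted from the suggestion itself; the profile name comes from the CRI-O log below, and the driver and runtime flags are assumptions consistent with this KVM/cri-o job, not the test's recorded invocation:
	
	    minikube start -p old-k8s-version-335468 \
	      --driver=kvm2 --container-runtime=crio \
	      --extra-config=kubelet.cgroup-driver=systemd
	
	See the linked issue #4172 for background on the cgroup-driver mismatch this works around.)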
	I0903 23:50:30.306733  171911 out.go:203] 
	
	
	==> CRI-O <==
	Sep 04 00:05:19 old-k8s-version-335468 crio[804]: time="2025-09-04 00:05:19.033139257Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1756944319033118702,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1db0fb59-b00c-4d33-b1b7-d2f128f72e5b name=/runtime.v1.ImageService/ImageFsInfo
	Sep 04 00:05:19 old-k8s-version-335468 crio[804]: time="2025-09-04 00:05:19.033835725Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=441755e1-5c59-444d-bf02-5e004efe78aa name=/runtime.v1.RuntimeService/ListContainers
	Sep 04 00:05:19 old-k8s-version-335468 crio[804]: time="2025-09-04 00:05:19.033896743Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=441755e1-5c59-444d-bf02-5e004efe78aa name=/runtime.v1.RuntimeService/ListContainers
	Sep 04 00:05:19 old-k8s-version-335468 crio[804]: time="2025-09-04 00:05:19.033928270Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=441755e1-5c59-444d-bf02-5e004efe78aa name=/runtime.v1.RuntimeService/ListContainers
	Sep 04 00:05:19 old-k8s-version-335468 crio[804]: time="2025-09-04 00:05:19.064416847Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=70d285b8-8e87-483e-81ab-abd20e4f0610 name=/runtime.v1.RuntimeService/Version
	Sep 04 00:05:19 old-k8s-version-335468 crio[804]: time="2025-09-04 00:05:19.064629907Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=70d285b8-8e87-483e-81ab-abd20e4f0610 name=/runtime.v1.RuntimeService/Version
	Sep 04 00:05:19 old-k8s-version-335468 crio[804]: time="2025-09-04 00:05:19.065968777Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f5b17ccb-af02-4eb4-b1ca-2da88c7c25bc name=/runtime.v1.ImageService/ImageFsInfo
	Sep 04 00:05:19 old-k8s-version-335468 crio[804]: time="2025-09-04 00:05:19.066448347Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1756944319066429752,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f5b17ccb-af02-4eb4-b1ca-2da88c7c25bc name=/runtime.v1.ImageService/ImageFsInfo
	Sep 04 00:05:19 old-k8s-version-335468 crio[804]: time="2025-09-04 00:05:19.066942064Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1ebfbd6c-f4dc-4115-989d-15cf371ec2c2 name=/runtime.v1.RuntimeService/ListContainers
	Sep 04 00:05:19 old-k8s-version-335468 crio[804]: time="2025-09-04 00:05:19.067015725Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1ebfbd6c-f4dc-4115-989d-15cf371ec2c2 name=/runtime.v1.RuntimeService/ListContainers
	Sep 04 00:05:19 old-k8s-version-335468 crio[804]: time="2025-09-04 00:05:19.067050726Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=1ebfbd6c-f4dc-4115-989d-15cf371ec2c2 name=/runtime.v1.RuntimeService/ListContainers
	Sep 04 00:05:19 old-k8s-version-335468 crio[804]: time="2025-09-04 00:05:19.098106792Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=bc008bc7-3063-4fcc-9b75-08f731fb4416 name=/runtime.v1.RuntimeService/Version
	Sep 04 00:05:19 old-k8s-version-335468 crio[804]: time="2025-09-04 00:05:19.098187295Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=bc008bc7-3063-4fcc-9b75-08f731fb4416 name=/runtime.v1.RuntimeService/Version
	Sep 04 00:05:19 old-k8s-version-335468 crio[804]: time="2025-09-04 00:05:19.099380823Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e41d72e0-f56e-4d78-96a5-974773a5818a name=/runtime.v1.ImageService/ImageFsInfo
	Sep 04 00:05:19 old-k8s-version-335468 crio[804]: time="2025-09-04 00:05:19.099760087Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1756944319099743051,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e41d72e0-f56e-4d78-96a5-974773a5818a name=/runtime.v1.ImageService/ImageFsInfo
	Sep 04 00:05:19 old-k8s-version-335468 crio[804]: time="2025-09-04 00:05:19.100509704Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=51f9bca9-2f47-4c94-87a4-c83146290e3e name=/runtime.v1.RuntimeService/ListContainers
	Sep 04 00:05:19 old-k8s-version-335468 crio[804]: time="2025-09-04 00:05:19.100566890Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=51f9bca9-2f47-4c94-87a4-c83146290e3e name=/runtime.v1.RuntimeService/ListContainers
	Sep 04 00:05:19 old-k8s-version-335468 crio[804]: time="2025-09-04 00:05:19.100599397Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=51f9bca9-2f47-4c94-87a4-c83146290e3e name=/runtime.v1.RuntimeService/ListContainers
	Sep 04 00:05:19 old-k8s-version-335468 crio[804]: time="2025-09-04 00:05:19.132823796Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=93ec1af8-f027-476b-a09e-04f160bddd23 name=/runtime.v1.RuntimeService/Version
	Sep 04 00:05:19 old-k8s-version-335468 crio[804]: time="2025-09-04 00:05:19.132905812Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=93ec1af8-f027-476b-a09e-04f160bddd23 name=/runtime.v1.RuntimeService/Version
	Sep 04 00:05:19 old-k8s-version-335468 crio[804]: time="2025-09-04 00:05:19.134056821Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0e9b7c24-a494-4c6c-b2da-d4002fe3c64b name=/runtime.v1.ImageService/ImageFsInfo
	Sep 04 00:05:19 old-k8s-version-335468 crio[804]: time="2025-09-04 00:05:19.134452679Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1756944319134433492,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0e9b7c24-a494-4c6c-b2da-d4002fe3c64b name=/runtime.v1.ImageService/ImageFsInfo
	Sep 04 00:05:19 old-k8s-version-335468 crio[804]: time="2025-09-04 00:05:19.134883720Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b8fbab32-a668-4a2b-8c9e-55de574f96c1 name=/runtime.v1.RuntimeService/ListContainers
	Sep 04 00:05:19 old-k8s-version-335468 crio[804]: time="2025-09-04 00:05:19.135033316Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b8fbab32-a668-4a2b-8c9e-55de574f96c1 name=/runtime.v1.RuntimeService/ListContainers
	Sep 04 00:05:19 old-k8s-version-335468 crio[804]: time="2025-09-04 00:05:19.135167407Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=b8fbab32-a668-4a2b-8c9e-55de574f96c1 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
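	A quick way to confirm the refusal is to probe the apiserver's health endpoint from inside the node (a sketch; 8443 is the APIServerPort used throughout this report):
		minikube ssh -p old-k8s-version-335468 -- curl -sk https://localhost:8443/healthz
		# prints "ok" when the apiserver is up; here it fails, consistent with the empty container list above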
	
	
	==> dmesg <==
	[Sep 3 23:42] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000007] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.000048] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.002453] (rpcbind)[118]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +1.031954] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000019] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.079592] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.108082] kauditd_printk_skb: 46 callbacks suppressed
	[  +0.035422] kauditd_printk_skb: 46 callbacks suppressed
	[Sep 3 23:48] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 00:05:19 up 23 min,  0 users,  load average: 0.03, 0.03, 0.01
	Linux old-k8s-version-335468 6.6.95 #1 SMP PREEMPT_DYNAMIC Sat Jul 26 03:21:57 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kubelet <==
	Sep 04 00:05:15 old-k8s-version-335468 kubelet[8762]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/golang.org/x/net/http2/frame.go:237 +0x89
	Sep 04 00:05:15 old-k8s-version-335468 kubelet[8762]: k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*Framer).ReadFrame(0xc0000d81c0, 0xc000d53350, 0x1, 0x0, 0x0)
	Sep 04 00:05:15 old-k8s-version-335468 kubelet[8762]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/golang.org/x/net/http2/frame.go:492 +0xa5
	Sep 04 00:05:15 old-k8s-version-335468 kubelet[8762]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.(*http2Client).reader(0xc00092ca80)
	Sep 04 00:05:15 old-k8s-version-335468 kubelet[8762]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:1265 +0x179
	Sep 04 00:05:15 old-k8s-version-335468 kubelet[8762]: created by k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client
	Sep 04 00:05:15 old-k8s-version-335468 kubelet[8762]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:300 +0xd31
	Sep 04 00:05:15 old-k8s-version-335468 kubelet[8762]: goroutine 146 [runnable]:
	Sep 04 00:05:15 old-k8s-version-335468 kubelet[8762]: runtime.Gosched(...)
	Sep 04 00:05:15 old-k8s-version-335468 kubelet[8762]:         /usr/local/go/src/runtime/proc.go:271
	Sep 04 00:05:15 old-k8s-version-335468 kubelet[8762]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.(*loopyWriter).run(0xc000d54a20, 0x0, 0x0)
	Sep 04 00:05:15 old-k8s-version-335468 kubelet[8762]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/controlbuf.go:549 +0x1a5
	Sep 04 00:05:15 old-k8s-version-335468 kubelet[8762]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client.func3(0xc00092ca80)
	Sep 04 00:05:15 old-k8s-version-335468 kubelet[8762]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:346 +0x7b
	Sep 04 00:05:15 old-k8s-version-335468 kubelet[8762]: created by k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client
	Sep 04 00:05:15 old-k8s-version-335468 kubelet[8762]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:344 +0xefc
	Sep 04 00:05:15 old-k8s-version-335468 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Sep 04 00:05:15 old-k8s-version-335468 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Sep 04 00:05:16 old-k8s-version-335468 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 174.
	Sep 04 00:05:16 old-k8s-version-335468 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Sep 04 00:05:16 old-k8s-version-335468 kubelet[8772]: I0904 00:05:16.124098    8772 server.go:416] Version: v1.20.0
	Sep 04 00:05:16 old-k8s-version-335468 kubelet[8772]: I0904 00:05:16.124422    8772 server.go:837] Client rotation is on, will bootstrap in background
	Sep 04 00:05:16 old-k8s-version-335468 kubelet[8772]: I0904 00:05:16.126144    8772 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Sep 04 00:05:16 old-k8s-version-335468 kubelet[8772]: I0904 00:05:16.127030    8772 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	Sep 04 00:05:16 old-k8s-version-335468 kubelet[8772]: W0904 00:05:16.127055    8772 manager.go:159] Cannot detect current cgroup on cgroup v2
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-335468 -n old-k8s-version-335468
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-335468 -n old-k8s-version-335468: exit status 2 (235.675056ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "old-k8s-version-335468" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (345.75s)
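The post-mortem helpers above read one status field at a time through a Go template; the --format flag accepts any template over the status struct, so several fields can be read in a single call (a sketch; Host and APIServer are fields this report already queries, and Kubelet is assumed to be exposed on the same struct):

  out/minikube-linux-amd64 status -p old-k8s-version-335468 \
    --format='host:{{.Host}} kubelet:{{.Kubelet}} apiserver:{{.APIServer}}'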

                                                
                                    

Test pass (272/322)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 25.6
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.06
9 TestDownloadOnly/v1.20.0/DeleteAll 0.14
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.34.0/json-events 13.81
13 TestDownloadOnly/v1.34.0/preload-exists 0
17 TestDownloadOnly/v1.34.0/LogsDuration 0.06
18 TestDownloadOnly/v1.34.0/DeleteAll 0.14
19 TestDownloadOnly/v1.34.0/DeleteAlwaysSucceeds 0.13
21 TestBinaryMirror 0.66
22 TestOffline 65.99
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
27 TestAddons/Setup 208.4
31 TestAddons/serial/GCPAuth/Namespaces 0.16
32 TestAddons/serial/GCPAuth/FakeCredentials 11.49
35 TestAddons/parallel/Registry 16.01
36 TestAddons/parallel/RegistryCreds 0.79
38 TestAddons/parallel/InspektorGadget 5.72
39 TestAddons/parallel/MetricsServer 6.12
41 TestAddons/parallel/CSI 57.52
42 TestAddons/parallel/Headlamp 20.94
43 TestAddons/parallel/CloudSpanner 6.57
44 TestAddons/parallel/LocalPath 55.29
45 TestAddons/parallel/NvidiaDevicePlugin 6.73
46 TestAddons/parallel/Yakd 11.77
48 TestAddons/StoppedEnableDisable 91.25
49 TestCertOptions 48.79
50 TestCertExpiration 301.65
52 TestForceSystemdFlag 76.65
53 TestForceSystemdEnv 48.5
55 TestKVMDriverInstallOrUpdate 1.97
59 TestErrorSpam/setup 41.25
60 TestErrorSpam/start 0.35
61 TestErrorSpam/status 0.78
62 TestErrorSpam/pause 1.68
63 TestErrorSpam/unpause 1.81
64 TestErrorSpam/stop 5.38
67 TestFunctional/serial/CopySyncFile 0
68 TestFunctional/serial/StartWithProxy 63.28
69 TestFunctional/serial/AuditLog 0
70 TestFunctional/serial/SoftStart 29.97
71 TestFunctional/serial/KubeContext 0.05
72 TestFunctional/serial/KubectlGetPods 0.08
75 TestFunctional/serial/CacheCmd/cache/add_remote 4.7
76 TestFunctional/serial/CacheCmd/cache/add_local 2.57
77 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
78 TestFunctional/serial/CacheCmd/cache/list 0.05
79 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.22
80 TestFunctional/serial/CacheCmd/cache/cache_reload 2.15
81 TestFunctional/serial/CacheCmd/cache/delete 0.1
82 TestFunctional/serial/MinikubeKubectlCmd 0.1
83 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
84 TestFunctional/serial/ExtraConfig 28.14
85 TestFunctional/serial/ComponentHealth 0.06
86 TestFunctional/serial/LogsCmd 1.33
87 TestFunctional/serial/LogsFileCmd 1.34
88 TestFunctional/serial/InvalidService 4.03
90 TestFunctional/parallel/ConfigCmd 0.41
91 TestFunctional/parallel/DashboardCmd 11.03
92 TestFunctional/parallel/DryRun 0.38
93 TestFunctional/parallel/InternationalLanguage 0.17
94 TestFunctional/parallel/StatusCmd 1.11
98 TestFunctional/parallel/ServiceCmdConnect 12.56
99 TestFunctional/parallel/AddonsCmd 0.14
100 TestFunctional/parallel/PersistentVolumeClaim 42.47
102 TestFunctional/parallel/SSHCmd 0.43
103 TestFunctional/parallel/CpCmd 1.53
104 TestFunctional/parallel/MySQL 30.31
105 TestFunctional/parallel/FileSync 0.22
106 TestFunctional/parallel/CertSync 1.41
110 TestFunctional/parallel/NodeLabels 0.06
112 TestFunctional/parallel/NonActiveRuntimeDisabled 0.47
114 TestFunctional/parallel/License 0.81
115 TestFunctional/parallel/ServiceCmd/DeployApp 9.24
116 TestFunctional/parallel/Version/short 0.06
117 TestFunctional/parallel/Version/components 0.81
118 TestFunctional/parallel/ImageCommands/ImageListShort 0.5
119 TestFunctional/parallel/ImageCommands/ImageListTable 0.3
120 TestFunctional/parallel/ImageCommands/ImageListJson 0.24
121 TestFunctional/parallel/ImageCommands/ImageListYaml 0.33
122 TestFunctional/parallel/ImageCommands/ImageBuild 9.4
123 TestFunctional/parallel/ImageCommands/Setup 1.72
124 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.16
125 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1
126 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 3.1
127 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.63
128 TestFunctional/parallel/ServiceCmd/List 0.47
129 TestFunctional/parallel/ImageCommands/ImageRemove 0.53
130 TestFunctional/parallel/ServiceCmd/JSONOutput 0.45
131 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.93
132 TestFunctional/parallel/ServiceCmd/HTTPS 0.32
133 TestFunctional/parallel/ServiceCmd/Format 0.32
134 TestFunctional/parallel/ServiceCmd/URL 0.4
135 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.97
145 TestFunctional/parallel/ProfileCmd/profile_not_create 0.42
146 TestFunctional/parallel/UpdateContextCmd/no_changes 0.11
147 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.1
148 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.09
149 TestFunctional/parallel/MountCmd/any-port 14.58
150 TestFunctional/parallel/ProfileCmd/profile_list 0.39
151 TestFunctional/parallel/ProfileCmd/profile_json_output 0.37
152 TestFunctional/parallel/MountCmd/specific-port 1.8
153 TestFunctional/parallel/MountCmd/VerifyCleanup 1.66
154 TestFunctional/delete_echo-server_images 0.03
155 TestFunctional/delete_my-image_image 0.01
156 TestFunctional/delete_minikube_cached_images 0.01
161 TestMultiControlPlane/serial/StartCluster 221.03
162 TestMultiControlPlane/serial/DeployApp 6.63
163 TestMultiControlPlane/serial/PingHostFromPods 1.13
164 TestMultiControlPlane/serial/AddWorkerNode 52.77
165 TestMultiControlPlane/serial/NodeLabels 0.07
166 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.91
167 TestMultiControlPlane/serial/CopyFile 13.32
168 TestMultiControlPlane/serial/StopSecondaryNode 91.47
169 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.71
170 TestMultiControlPlane/serial/RestartSecondaryNode 33.62
171 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.96
172 TestMultiControlPlane/serial/RestartClusterKeepsNodes 410.4
173 TestMultiControlPlane/serial/DeleteSecondaryNode 18.62
174 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.71
175 TestMultiControlPlane/serial/StopCluster 272.53
176 TestMultiControlPlane/serial/RestartCluster 117.67
177 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.66
178 TestMultiControlPlane/serial/AddSecondaryNode 82.92
179 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.9
183 TestJSONOutput/start/Command 61.25
184 TestJSONOutput/start/Audit 0
186 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
187 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
189 TestJSONOutput/pause/Command 0.75
190 TestJSONOutput/pause/Audit 0
192 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
193 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
195 TestJSONOutput/unpause/Command 0.66
196 TestJSONOutput/unpause/Audit 0
198 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
199 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
201 TestJSONOutput/stop/Command 7.33
202 TestJSONOutput/stop/Audit 0
204 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
205 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
206 TestErrorJSONOutput 0.2
211 TestMainNoArgs 0.05
212 TestMinikubeProfile 92.39
215 TestMountStart/serial/StartWithMountFirst 27.92
216 TestMountStart/serial/VerifyMountFirst 0.39
217 TestMountStart/serial/StartWithMountSecond 29.2
218 TestMountStart/serial/VerifyMountSecond 0.38
219 TestMountStart/serial/DeleteFirst 0.71
220 TestMountStart/serial/VerifyMountPostDelete 0.38
221 TestMountStart/serial/Stop 1.66
222 TestMountStart/serial/RestartStopped 23.69
223 TestMountStart/serial/VerifyMountPostStop 0.38
226 TestMultiNode/serial/FreshStart2Nodes 110.15
227 TestMultiNode/serial/DeployApp2Nodes 6.17
228 TestMultiNode/serial/PingHostFrom2Pods 0.74
229 TestMultiNode/serial/AddNode 50.42
230 TestMultiNode/serial/MultiNodeLabels 0.06
231 TestMultiNode/serial/ProfileList 0.62
232 TestMultiNode/serial/CopyFile 7.43
233 TestMultiNode/serial/StopNode 2.51
234 TestMultiNode/serial/StartAfterStop 38.98
235 TestMultiNode/serial/RestartKeepsNodes 327.96
236 TestMultiNode/serial/DeleteNode 2.77
237 TestMultiNode/serial/StopMultiNode 182.08
238 TestMultiNode/serial/RestartMultiNode 102.39
239 TestMultiNode/serial/ValidateNameConflict 47.35
246 TestScheduledStopUnix 117.8
250 TestRunningBinaryUpgrade 153.51
254 TestStoppedBinaryUpgrade/Setup 2.83
258 TestStoppedBinaryUpgrade/Upgrade 237.41
263 TestNetworkPlugins/group/false 3.18
275 TestPause/serial/Start 75.27
277 TestStoppedBinaryUpgrade/MinikubeLogs 0.83
279 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
280 TestNoKubernetes/serial/StartWithK8s 54.68
281 TestNoKubernetes/serial/StartWithStopK8s 29.96
282 TestNoKubernetes/serial/Start 44.75
283 TestNetworkPlugins/group/auto/Start 93.59
284 TestNoKubernetes/serial/VerifyK8sNotRunning 0.21
285 TestNoKubernetes/serial/ProfileList 1.56
286 TestNoKubernetes/serial/Stop 1.33
287 TestNoKubernetes/serial/StartNoArgs 47.07
288 TestNetworkPlugins/group/kindnet/Start 65.75
289 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.22
290 TestNetworkPlugins/group/calico/Start 94.76
291 TestNetworkPlugins/group/auto/KubeletFlags 0.23
292 TestNetworkPlugins/group/auto/NetCatPod 9.24
293 TestNetworkPlugins/group/auto/DNS 0.15
294 TestNetworkPlugins/group/auto/Localhost 0.11
295 TestNetworkPlugins/group/auto/HairPin 0.11
296 TestNetworkPlugins/group/custom-flannel/Start 96.25
297 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
298 TestNetworkPlugins/group/kindnet/KubeletFlags 0.29
299 TestNetworkPlugins/group/kindnet/NetCatPod 11.38
300 TestNetworkPlugins/group/bridge/Start 75.74
301 TestNetworkPlugins/group/kindnet/DNS 0.15
302 TestNetworkPlugins/group/kindnet/Localhost 0.14
303 TestNetworkPlugins/group/kindnet/HairPin 0.12
304 TestNetworkPlugins/group/calico/ControllerPod 6.01
305 TestNetworkPlugins/group/flannel/Start 89.11
306 TestNetworkPlugins/group/calico/KubeletFlags 0.21
307 TestNetworkPlugins/group/calico/NetCatPod 10.22
308 TestNetworkPlugins/group/calico/DNS 0.2
309 TestNetworkPlugins/group/calico/Localhost 0.17
310 TestNetworkPlugins/group/calico/HairPin 0.16
311 TestNetworkPlugins/group/enable-default-cni/Start 70.55
312 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.22
313 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.22
314 TestNetworkPlugins/group/custom-flannel/DNS 0.15
315 TestNetworkPlugins/group/custom-flannel/Localhost 0.12
316 TestNetworkPlugins/group/custom-flannel/HairPin 0.12
317 TestNetworkPlugins/group/bridge/KubeletFlags 0.25
318 TestNetworkPlugins/group/bridge/NetCatPod 11.32
319 TestNetworkPlugins/group/bridge/DNS 0.17
320 TestNetworkPlugins/group/bridge/Localhost 0.15
321 TestNetworkPlugins/group/bridge/HairPin 0.14
325 TestStartStop/group/no-preload/serial/FirstStart 86.14
326 TestNetworkPlugins/group/flannel/ControllerPod 6.01
327 TestNetworkPlugins/group/flannel/KubeletFlags 0.25
328 TestNetworkPlugins/group/flannel/NetCatPod 14.32
329 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.25
330 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.29
331 TestNetworkPlugins/group/flannel/DNS 0.13
332 TestNetworkPlugins/group/flannel/Localhost 0.11
333 TestNetworkPlugins/group/flannel/HairPin 0.11
334 TestNetworkPlugins/group/enable-default-cni/DNS 0.2
335 TestNetworkPlugins/group/enable-default-cni/Localhost 0.11
336 TestNetworkPlugins/group/enable-default-cni/HairPin 0.12
338 TestStartStop/group/embed-certs/serial/FirstStart 60.66
340 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 82.24
341 TestStartStop/group/no-preload/serial/DeployApp 10.34
342 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.13
343 TestStartStop/group/no-preload/serial/Stop 90.85
344 TestStartStop/group/embed-certs/serial/DeployApp 10.28
345 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.97
346 TestStartStop/group/embed-certs/serial/Stop 91.34
347 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.25
348 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.93
349 TestStartStop/group/default-k8s-diff-port/serial/Stop 91.4
350 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.2
351 TestStartStop/group/no-preload/serial/SecondStart 56.3
352 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.21
353 TestStartStop/group/embed-certs/serial/SecondStart 50.67
354 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.26
355 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 51.29
356 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 14.01
357 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 6
359 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 11.01
360 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.29
361 TestStartStop/group/no-preload/serial/Pause 3.65
364 TestStartStop/group/newest-cni/serial/FirstStart 50.33
365 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.09
366 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.27
367 TestStartStop/group/embed-certs/serial/Pause 3
368 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 12.01
369 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.07
370 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.24
371 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.87
372 TestStartStop/group/newest-cni/serial/DeployApp 0
373 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.92
374 TestStartStop/group/newest-cni/serial/Stop 10.34
375 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.19
376 TestStartStop/group/newest-cni/serial/SecondStart 36.68
377 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
378 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
379 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.28
380 TestStartStop/group/newest-cni/serial/Pause 2.74
381 TestStartStop/group/old-k8s-version/serial/Stop 5.3
382 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.19
TestDownloadOnly/v1.20.0/json-events (25.6s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-945548 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-945548 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (25.599547438s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (25.60s)

                                                
                                    
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I0903 22:27:18.668023  113288 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
I0903 22:27:18.668125  113288 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21341-109162/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-945548
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-945548: exit status 85 (59.312658ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                  ARGS                                                                                   │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-945548 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio │ download-only-945548 │ jenkins │ v1.36.0 │ 03 Sep 25 22:26 UTC │          │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/03 22:26:53
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0903 22:26:53.109090  113300 out.go:360] Setting OutFile to fd 1 ...
	I0903 22:26:53.109495  113300 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0903 22:26:53.109549  113300 out.go:374] Setting ErrFile to fd 2...
	I0903 22:26:53.109567  113300 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0903 22:26:53.110069  113300 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21341-109162/.minikube/bin
	W0903 22:26:53.110403  113300 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21341-109162/.minikube/config/config.json: open /home/jenkins/minikube-integration/21341-109162/.minikube/config/config.json: no such file or directory
	I0903 22:26:53.111238  113300 out.go:368] Setting JSON to true
	I0903 22:26:53.112069  113300 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":4157,"bootTime":1756934256,"procs":205,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0903 22:26:53.112156  113300 start.go:140] virtualization: kvm guest
	I0903 22:26:53.114032  113300 out.go:99] [download-only-945548] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	W0903 22:26:53.114137  113300 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/21341-109162/.minikube/cache/preloaded-tarball: no such file or directory
	I0903 22:26:53.114209  113300 notify.go:220] Checking for updates...
	I0903 22:26:53.115184  113300 out.go:171] MINIKUBE_LOCATION=21341
	I0903 22:26:53.116255  113300 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0903 22:26:53.117288  113300 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21341-109162/kubeconfig
	I0903 22:26:53.118422  113300 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21341-109162/.minikube
	I0903 22:26:53.119532  113300 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W0903 22:26:53.121451  113300 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0903 22:26:53.121647  113300 driver.go:421] Setting default libvirt URI to qemu:///system
	I0903 22:26:53.219641  113300 out.go:99] Using the kvm2 driver based on user configuration
	I0903 22:26:53.219669  113300 start.go:304] selected driver: kvm2
	I0903 22:26:53.219674  113300 start.go:918] validating driver "kvm2" against <nil>
	I0903 22:26:53.220021  113300 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0903 22:26:53.220143  113300 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21341-109162/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	W0903 22:26:53.224683  113300 install.go:62] docker-machine-driver-kvm2: exit status 1
	I0903 22:26:53.225937  113300 out.go:99] Downloading driver docker-machine-driver-kvm2:
	I0903 22:26:53.226031  113300 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.36.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.36.0/docker-machine-driver-kvm2-amd64.sha256 -> /home/jenkins/minikube-integration/21341-109162/.minikube/bin/docker-machine-driver-kvm2
	I0903 22:26:54.425844  113300 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0903 22:26:54.426394  113300 start_flags.go:410] Using suggested 6144MB memory alloc based on sys=32089MB, container=0MB
	I0903 22:26:54.426550  113300 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I0903 22:26:54.426586  113300 cni.go:84] Creating CNI manager for ""
	I0903 22:26:54.426635  113300 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0903 22:26:54.426646  113300 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0903 22:26:54.426721  113300 start.go:348] cluster config:
	{Name:download-only-945548 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 Memory:6144 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-945548 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0903 22:26:54.426927  113300 iso.go:125] acquiring lock: {Name:mk1032fbced3c9e76ba5a04480289e9f07d0eb9c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0903 22:26:54.428787  113300 out.go:99] Downloading VM boot image ...
	I0903 22:26:54.428820  113300 download.go:108] Downloading: https://storage.googleapis.com/minikube-builds/iso/21147/minikube-v1.36.0-1753487480-21147-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/21147/minikube-v1.36.0-1753487480-21147-amd64.iso.sha256 -> /home/jenkins/minikube-integration/21341-109162/.minikube/cache/iso/amd64/minikube-v1.36.0-1753487480-21147-amd64.iso
	I0903 22:27:04.599723  113300 out.go:99] Starting "download-only-945548" primary control-plane node in "download-only-945548" cluster
	I0903 22:27:04.599755  113300 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0903 22:27:04.697870  113300 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0903 22:27:04.697911  113300 cache.go:58] Caching tarball of preloaded images
	I0903 22:27:04.698067  113300 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0903 22:27:04.699778  113300 out.go:99] Downloading Kubernetes v1.20.0 preload ...
	I0903 22:27:04.699794  113300 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0903 22:27:04.796939  113300 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:f93b07cde9c3289306cbaeb7a1803c19 -> /home/jenkins/minikube-integration/21341-109162/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0903 22:27:16.877668  113300 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0903 22:27:16.877758  113300 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/21341-109162/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0903 22:27:17.895400  113300 cache.go:61] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0903 22:27:17.895749  113300 profile.go:143] Saving config to /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/download-only-945548/config.json ...
	I0903 22:27:17.895779  113300 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/download-only-945548/config.json: {Name:mk5b7cb6a58bc11d2cb65f08e6cd79fd1ecd2246 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0903 22:27:17.895942  113300 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0903 22:27:17.896123  113300 download.go:108] Downloading: https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/21341-109162/.minikube/cache/linux/amd64/v1.20.0/kubectl
	
	
	* The control-plane node download-only-945548 host does not exist
	  To start a cluster, run: "minikube start -p download-only-945548"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.06s)
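The checksum=file:... suffix in the download.go lines above means minikube fetches the artifact together with its published digest and verifies the pair before caching. The same check can be reproduced by hand (a sketch, not minikube's own code; the URL is the one from the log):

  curl -LO https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl
  curl -LO https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl.sha256
  # the .sha256 file holds only the hex digest, so build a "digest  filename" line for sha256sum
  echo "$(cat kubectl.sha256)  kubectl" | sha256sum --check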

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.14s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-945548
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestDownloadOnly/v1.34.0/json-events (13.81s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-462504 --force --alsologtostderr --kubernetes-version=v1.34.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-462504 --force --alsologtostderr --kubernetes-version=v1.34.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (13.808627954s)
--- PASS: TestDownloadOnly/v1.34.0/json-events (13.81s)

                                                
                                    
TestDownloadOnly/v1.34.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/preload-exists
I0903 22:27:32.799555  113288 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
I0903 22:27:32.799613  113288 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21341-109162/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-462504
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-462504: exit status 85 (58.256324ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                  ARGS                                                                                   │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-945548 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio │ download-only-945548 │ jenkins │ v1.36.0 │ 03 Sep 25 22:26 UTC │                     │
	│ delete  │ --all                                                                                                                                                                   │ minikube             │ jenkins │ v1.36.0 │ 03 Sep 25 22:27 UTC │ 03 Sep 25 22:27 UTC │
	│ delete  │ -p download-only-945548                                                                                                                                                 │ download-only-945548 │ jenkins │ v1.36.0 │ 03 Sep 25 22:27 UTC │ 03 Sep 25 22:27 UTC │
	│ start   │ -o=json --download-only -p download-only-462504 --force --alsologtostderr --kubernetes-version=v1.34.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio │ download-only-462504 │ jenkins │ v1.36.0 │ 03 Sep 25 22:27 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/03 22:27:19
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0903 22:27:19.031566  113546 out.go:360] Setting OutFile to fd 1 ...
	I0903 22:27:19.031650  113546 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0903 22:27:19.031655  113546 out.go:374] Setting ErrFile to fd 2...
	I0903 22:27:19.031659  113546 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0903 22:27:19.031850  113546 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21341-109162/.minikube/bin
	I0903 22:27:19.032410  113546 out.go:368] Setting JSON to true
	I0903 22:27:19.033221  113546 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":4183,"bootTime":1756934256,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0903 22:27:19.033336  113546 start.go:140] virtualization: kvm guest
	I0903 22:27:19.035440  113546 out.go:99] [download-only-462504] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	I0903 22:27:19.035571  113546 notify.go:220] Checking for updates...
	I0903 22:27:19.036661  113546 out.go:171] MINIKUBE_LOCATION=21341
	I0903 22:27:19.037997  113546 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0903 22:27:19.039186  113546 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21341-109162/kubeconfig
	I0903 22:27:19.040199  113546 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21341-109162/.minikube
	I0903 22:27:19.041123  113546 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W0903 22:27:19.042864  113546 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0903 22:27:19.043068  113546 driver.go:421] Setting default libvirt URI to qemu:///system
	I0903 22:27:19.074153  113546 out.go:99] Using the kvm2 driver based on user configuration
	I0903 22:27:19.074204  113546 start.go:304] selected driver: kvm2
	I0903 22:27:19.074224  113546 start.go:918] validating driver "kvm2" against <nil>
	I0903 22:27:19.074577  113546 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0903 22:27:19.074669  113546 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21341-109162/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0903 22:27:19.091173  113546 install.go:137] /home/jenkins/minikube-integration/21341-109162/.minikube/bin/docker-machine-driver-kvm2 version is 1.36.0
	I0903 22:27:19.091227  113546 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0903 22:27:19.091765  113546 start_flags.go:410] Using suggested 6144MB memory alloc based on sys=32089MB, container=0MB
	I0903 22:27:19.091898  113546 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I0903 22:27:19.091928  113546 cni.go:84] Creating CNI manager for ""
	I0903 22:27:19.091980  113546 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0903 22:27:19.091992  113546 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0903 22:27:19.092046  113546 start.go:348] cluster config:
	{Name:download-only-462504 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 Memory:6144 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:download-only-462504 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0903 22:27:19.092153  113546 iso.go:125] acquiring lock: {Name:mk1032fbced3c9e76ba5a04480289e9f07d0eb9c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0903 22:27:19.093637  113546 out.go:99] Starting "download-only-462504" primary control-plane node in "download-only-462504" cluster
	I0903 22:27:19.093658  113546 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0903 22:27:19.988165  113546 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.0/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4
	I0903 22:27:19.988249  113546 cache.go:58] Caching tarball of preloaded images
	I0903 22:27:19.988436  113546 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0903 22:27:19.990049  113546 out.go:99] Downloading Kubernetes v1.34.0 preload ...
	I0903 22:27:19.990082  113546 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 ...
	I0903 22:27:20.087746  113546 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.0/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:2ff28357f4fb6607eaee8f503f8804cd -> /home/jenkins/minikube-integration/21341-109162/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-462504 host does not exist
	  To start a cluster, run: "minikube start -p download-only-462504"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.0/LogsDuration (0.06s)

                                                
                                    
TestDownloadOnly/v1.34.0/DeleteAll (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.34.0/DeleteAll (0.14s)

                                                
                                    
TestDownloadOnly/v1.34.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-462504
--- PASS: TestDownloadOnly/v1.34.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestBinaryMirror (0.66s)

                                                
                                                
=== RUN   TestBinaryMirror
I0903 22:27:33.383718  113288 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.0/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-140139 --alsologtostderr --binary-mirror http://127.0.0.1:46845 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-140139" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-140139
--- PASS: TestBinaryMirror (0.66s)
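TestBinaryMirror exercises the --binary-mirror flag, which redirects kubectl/kubelet/kubeadm downloads to a user-supplied endpoint instead of dl.k8s.io. A usage sketch (the profile name is hypothetical, and the port is whatever your local mirror listens on; 46845 happens to be the one this run picked):

  minikube start --download-only -p binary-mirror-demo \
    --binary-mirror http://127.0.0.1:46845 \
    --driver=kvm2 --container-runtime=crio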

                                                
                                    
TestOffline (65.99s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-911470 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-911470 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio: (1m4.982705055s)
helpers_test.go:175: Cleaning up "offline-crio-911470" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-911470
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-911470: (1.004058772s)
--- PASS: TestOffline (65.99s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-389176
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-389176: exit status 85 (53.480287ms)

-- stdout --
	* Profile "addons-389176" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-389176"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-389176
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-389176: exit status 85 (53.991811ms)

-- stdout --
	* Profile "addons-389176" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-389176"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

TestAddons/Setup (208.4s)

=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p addons-389176 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-amd64 start -p addons-389176 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (3m28.394702774s)
--- PASS: TestAddons/Setup (208.40s)
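
With this many --addons flags in a single start invocation, it is hard to tell at a glance what actually came up. A hedged follow-up (not part of the test run; `minikube addons list` and the -p profile flag are standard minikube CLI) is to list addon states for the profile afterwards:

# list enabled/disabled addon states for the addons-389176 profile (sketch)
out/minikube-linux-amd64 addons list -p addons-389176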

TestAddons/serial/GCPAuth/Namespaces (0.16s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-389176 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-389176 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.16s)

TestAddons/serial/GCPAuth/FakeCredentials (11.49s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-389176 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-389176 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [f3a05009-9dab-4e77-ae8d-565eb5fedd3d] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [f3a05009-9dab-4e77-ae8d-565eb5fedd3d] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 11.003636558s
addons_test.go:694: (dbg) Run:  kubectl --context addons-389176 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-389176 describe sa gcp-auth-test
addons_test.go:744: (dbg) Run:  kubectl --context addons-389176 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (11.49s)

TestAddons/parallel/Registry (16.01s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 8.087805ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-66898fdd98-rnmbr" [cbdd00d6-7b6a-49e1-a285-71f1c8a40580] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.006491293s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-7h9df" [0cf8a4d4-8129-4399-9aeb-6c79b6faba16] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.00417647s
addons_test.go:392: (dbg) Run:  kubectl --context addons-389176 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-389176 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-389176 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (5.140826716s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-amd64 -p addons-389176 ip
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-389176 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (16.01s)
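
The wget --spider probe above resolves the registry through cluster DNS from inside a pod; from the host, the same registry answers on the node IP on port 5000 (the endpoint visible in the `[DEBUG] GET http://192.168.39.230:5000` line under TestAddons/parallel/CSI below). A hedged sketch using the standard Docker registry v2 API, not something the test itself runs:

# list repositories via the registry's v2 catalog endpoint (sketch)
curl -s "http://$(out/minikube-linux-amd64 -p addons-389176 ip):5000/v2/_catalog"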

TestAddons/parallel/RegistryCreds (0.79s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 3.306419ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-389176
addons_test.go:332: (dbg) Run:  kubectl --context addons-389176 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-389176 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (0.79s)

TestAddons/parallel/InspektorGadget (5.72s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-bm5l4" [2f9eccea-ea75-4a9b-9fd6-ee1c0042454b] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.004356703s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-389176 addons disable inspektor-gadget --alsologtostderr -v=1
--- PASS: TestAddons/parallel/InspektorGadget (5.72s)

TestAddons/parallel/MetricsServer (6.12s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 7.945747ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-mz7wt" [1b3ad0b2-9c1c-48e1-b571-fc3871122514] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.006721393s
addons_test.go:463: (dbg) Run:  kubectl --context addons-389176 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-389176 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-389176 addons disable metrics-server --alsologtostderr -v=1: (1.029969417s)
--- PASS: TestAddons/parallel/MetricsServer (6.12s)

TestAddons/parallel/CSI (57.52s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I0903 22:31:28.899339  113288 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0903 22:31:28.907647  113288 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0903 22:31:28.907672  113288 kapi.go:107] duration metric: took 8.351724ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 8.362239ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-389176 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-389176 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-389176 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-389176 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-389176 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-389176 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-389176 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-389176 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-389176 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-389176 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-389176 get pvc hpvc -o jsonpath={.status.phase} -n default
2025/09/03 22:31:38 [DEBUG] GET http://192.168.39.230:5000
helpers_test.go:402: (dbg) Run:  kubectl --context addons-389176 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-389176 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-389176 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-389176 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-389176 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-389176 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-389176 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-389176 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [26bbcc68-abba-40ea-909c-f0f3575ee463] Pending
helpers_test.go:352: "task-pv-pod" [26bbcc68-abba-40ea-909c-f0f3575ee463] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [26bbcc68-abba-40ea-909c-f0f3575ee463] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 14.003544983s
addons_test.go:572: (dbg) Run:  kubectl --context addons-389176 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-389176 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: (dbg) Run:  kubectl --context addons-389176 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-389176 delete pod task-pv-pod
addons_test.go:588: (dbg) Run:  kubectl --context addons-389176 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-389176 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-389176 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-389176 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-389176 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-389176 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-389176 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-389176 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-389176 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-389176 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-389176 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-389176 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-389176 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [4b4ab485-a889-43dc-9c65-2b90803eae50] Pending
helpers_test.go:352: "task-pv-pod-restore" [4b4ab485-a889-43dc-9c65-2b90803eae50] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [4b4ab485-a889-43dc-9c65-2b90803eae50] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.004634546s
addons_test.go:614: (dbg) Run:  kubectl --context addons-389176 delete pod task-pv-pod-restore
addons_test.go:618: (dbg) Run:  kubectl --context addons-389176 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-389176 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-389176 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-389176 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-389176 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.887971889s)
--- PASS: TestAddons/parallel/CSI (57.52s)
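
The restore half of this test hinges on a PVC whose dataSource references the VolumeSnapshot taken earlier; the testdata manifests themselves are not echoed into the log. A minimal sketch of what a claim like testdata/csi-hostpath-driver/pvc-restore.yaml typically contains (the dataSource stanza is the standard Kubernetes snapshot-restore mechanism; the storage class name and size here are assumptions, not taken from this log):

kubectl --context addons-389176 apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: hpvc-restore
spec:
  storageClassName: csi-hostpath-sc   # assumed hostpath driver storage class
  dataSource:
    name: new-snapshot-demo           # the VolumeSnapshot created above
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi                    # assumed size
EOF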

TestAddons/parallel/Headlamp (20.94s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-389176 --alsologtostderr -v=1
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:352: "headlamp-6f46646d79-xv5pm" [f07ffd1f-d7b4-4141-9441-7543220f710c] Pending
helpers_test.go:352: "headlamp-6f46646d79-xv5pm" [f07ffd1f-d7b4-4141-9441-7543220f710c] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:352: "headlamp-6f46646d79-xv5pm" [f07ffd1f-d7b4-4141-9441-7543220f710c] Running
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 14.004387963s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-389176 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-389176 addons disable headlamp --alsologtostderr -v=1: (6.078840202s)
--- PASS: TestAddons/parallel/Headlamp (20.94s)

TestAddons/parallel/CloudSpanner (6.57s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-c55d4cb6d-xxf5x" [935b8d9f-3f0b-4778-a813-f2ecf8bf046d] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.004596316s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-389176 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (6.57s)

TestAddons/parallel/LocalPath (55.29s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-389176 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-389176 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-389176 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-389176 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-389176 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-389176 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-389176 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-389176 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-389176 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [34159435-60ba-4f2c-8177-81f8a45ffff7] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [34159435-60ba-4f2c-8177-81f8a45ffff7] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [34159435-60ba-4f2c-8177-81f8a45ffff7] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.003279097s
addons_test.go:967: (dbg) Run:  kubectl --context addons-389176 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-amd64 -p addons-389176 ssh "cat /opt/local-path-provisioner/pvc-427058f3-6272-436c-9cfd-91031a1fcb72_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-389176 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-389176 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-389176 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-389176 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.318606604s)
--- PASS: TestAddons/parallel/LocalPath (55.29s)
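
The ssh `cat` step above reads the written file straight off the node's backing directory; note how the provisioned path encodes the PV name, namespace, and claim (pvc-..._default_test-pvc). A hedged sketch of a claim bound to the rancher local-path provisioner (the storage class name "local-path" is that provisioner's conventional default and an assumption here, as the testdata pvc.yaml is not echoed into the log):

kubectl --context addons-389176 apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc
spec:
  storageClassName: local-path   # assumed provisioner default class
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 128Mi             # assumed size
EOF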

TestAddons/parallel/NvidiaDevicePlugin (6.73s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-x5kx7" [b7bd9f8c-fc5e-4bb5-91a5-c454cebcabc2] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.028344819s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-389176 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.73s)

TestAddons/parallel/Yakd (11.77s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-9vwxt" [c0900370-6eaa-4d09-bf61-42ee912cebf8] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.003183062s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-389176 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-389176 addons disable yakd --alsologtostderr -v=1: (5.76391247s)
--- PASS: TestAddons/parallel/Yakd (11.77s)

TestAddons/StoppedEnableDisable (91.25s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-389176
addons_test.go:172: (dbg) Done: out/minikube-linux-amd64 stop -p addons-389176: (1m30.970405739s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-389176
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-389176
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-389176
--- PASS: TestAddons/StoppedEnableDisable (91.25s)

TestCertOptions (48.79s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-161097 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-161097 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (47.275916619s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-161097 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-161097 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-161097 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-161097" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-161097
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-161097: (1.005794982s)
--- PASS: TestCertOptions (48.79s)

TestCertExpiration (301.65s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-689039 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
E0903 23:28:59.139053  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/functional-381687/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-689039 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (1m7.29042105s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-689039 --memory=3072 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-689039 --memory=3072 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (53.540110969s)
helpers_test.go:175: Cleaning up "cert-expiration-689039" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-689039
--- PASS: TestCertExpiration (301.65s)
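
The flow above depends on the control-plane certs actually expiring between the two start invocations (--cert-expiration=3m, then 8760h to force regeneration). A hedged way to inspect the live expiry on the node, reusing the apiserver cert path that TestCertOptions reads above (openssl's -enddate prints the certificate's notAfter field):

# print the apiserver certificate's expiry from inside the node (sketch)
out/minikube-linux-amd64 ssh -p cert-expiration-689039 -- "sudo openssl x509 -noout -enddate -in /var/lib/minikube/certs/apiserver.crt"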

TestForceSystemdFlag (76.65s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-037213 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-037213 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m15.664624897s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-037213 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-037213" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-037213
--- PASS: TestForceSystemdFlag (76.65s)
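
The `cat /etc/crio/crio.conf.d/02-crio.conf` step exists to assert that --force-systemd switched CRI-O's cgroup manager. A narrower hedged check for just that setting (cgroup_manager is the CRI-O crio.conf key; with systemd in force the expected value is "systemd"):

# grep only the cgroup manager line from the CRI-O drop-in (sketch)
out/minikube-linux-amd64 -p force-systemd-flag-037213 ssh "grep cgroup_manager /etc/crio/crio.conf.d/02-crio.conf"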

TestForceSystemdEnv (48.5s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-753758 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-753758 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (47.019198631s)
helpers_test.go:175: Cleaning up "force-systemd-env-753758" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-753758
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-753758: (1.476405598s)
--- PASS: TestForceSystemdEnv (48.50s)

TestKVMDriverInstallOrUpdate (1.97s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

=== CONT  TestKVMDriverInstallOrUpdate
I0903 23:27:57.925322  113288 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0903 23:27:57.925506  113288 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-without-version:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
W0903 23:27:57.953144  113288 install.go:62] docker-machine-driver-kvm2: exit status 1
W0903 23:27:57.953300  113288 out.go:176] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0903 23:27:57.953359  113288 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate1829316809/001/docker-machine-driver-kvm2
I0903 23:27:58.194063  113288 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate1829316809/001/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x5819440 0x5819440 0x5819440 0x5819440 0x5819440 0x5819440 0x5819440] Decompressors:map[bz2:0xc00057db90 gz:0xc00057db98 tar:0xc00057db30 tar.bz2:0xc00057db40 tar.gz:0xc00057db50 tar.xz:0xc00057db60 tar.zst:0xc00057db70 tbz2:0xc00057db40 tgz:0xc00057db50 txz:0xc00057db60 tzst:0xc00057db70 xz:0xc00057dba0 zip:0xc00057dbc0 zst:0xc00057dba8] Getters:map[file:0xc001e7a220 http:0xc000148eb0 https:0xc000148f00] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0903 23:27:58.194110  113288 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate1829316809/001/docker-machine-driver-kvm2
I0903 23:27:59.287570  113288 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0903 23:27:59.287659  113288 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-older-version:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0903 23:27:59.316338  113288 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-older-version/docker-machine-driver-kvm2 version is 1.1.1
W0903 23:27:59.316368  113288 install.go:62] docker-machine-driver-kvm2: docker-machine-driver-kvm2 is version 1.1.1, want 1.3.0
W0903 23:27:59.316430  113288 out.go:176] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0903 23:27:59.316455  113288 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate1829316809/002/docker-machine-driver-kvm2
I0903 23:27:59.345930  113288 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate1829316809/002/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x5819440 0x5819440 0x5819440 0x5819440 0x5819440 0x5819440 0x5819440] Decompressors:map[bz2:0xc00057db90 gz:0xc00057db98 tar:0xc00057db30 tar.bz2:0xc00057db40 tar.gz:0xc00057db50 tar.xz:0xc00057db60 tar.zst:0xc00057db70 tbz2:0xc00057db40 tgz:0xc00057db50 txz:0xc00057db60 tzst:0xc00057db70 xz:0xc00057dba0 zip:0xc00057dbc0 zst:0xc00057dba8] Getters:map[file:0xc002338f70 http:0xc002a5ce60 https:0xc002a5ceb0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0903 23:27:59.345974  113288 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate1829316809/002/docker-machine-driver-kvm2
--- PASS: TestKVMDriverInstallOrUpdate (1.97s)
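
The two 404-and-retry sequences above show the driver updater's fallback order: fetch the arch-suffixed release asset first, then retry the unsuffixed common name when the checksum fetch 404s. A hedged shell equivalent of that order (URLs as logged; curl -f fails on HTTP errors, so the || branch fires on the 404):

# sketch of the logged fallback: arch-specific asset, then the common name
base=https://github.com/kubernetes/minikube/releases/download/v1.3.0
curl -fL -o docker-machine-driver-kvm2 "$base/docker-machine-driver-kvm2-amd64" \
  || curl -fL -o docker-machine-driver-kvm2 "$base/docker-machine-driver-kvm2"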

TestErrorSpam/setup (41.25s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-290544 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-290544 --driver=kvm2  --container-runtime=crio
E0903 22:36:03.169830  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/addons-389176/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0903 22:36:03.176345  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/addons-389176/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0903 22:36:03.187698  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/addons-389176/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0903 22:36:03.209124  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/addons-389176/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0903 22:36:03.250654  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/addons-389176/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0903 22:36:03.332201  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/addons-389176/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0903 22:36:03.493833  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/addons-389176/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0903 22:36:03.815582  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/addons-389176/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0903 22:36:04.457691  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/addons-389176/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0903 22:36:05.739371  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/addons-389176/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0903 22:36:08.301125  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/addons-389176/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0903 22:36:13.423402  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/addons-389176/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0903 22:36:23.665137  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/addons-389176/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-290544 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-290544 --driver=kvm2  --container-runtime=crio: (41.251827524s)
--- PASS: TestErrorSpam/setup (41.25s)

TestErrorSpam/start (0.35s)

=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-290544 --log_dir /tmp/nospam-290544 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-290544 --log_dir /tmp/nospam-290544 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-290544 --log_dir /tmp/nospam-290544 start --dry-run
--- PASS: TestErrorSpam/start (0.35s)

TestErrorSpam/status (0.78s)

=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-290544 --log_dir /tmp/nospam-290544 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-290544 --log_dir /tmp/nospam-290544 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-290544 --log_dir /tmp/nospam-290544 status
--- PASS: TestErrorSpam/status (0.78s)

TestErrorSpam/pause (1.68s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-290544 --log_dir /tmp/nospam-290544 pause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-290544 --log_dir /tmp/nospam-290544 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-290544 --log_dir /tmp/nospam-290544 pause
--- PASS: TestErrorSpam/pause (1.68s)

TestErrorSpam/unpause (1.81s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-290544 --log_dir /tmp/nospam-290544 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-290544 --log_dir /tmp/nospam-290544 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-290544 --log_dir /tmp/nospam-290544 unpause
--- PASS: TestErrorSpam/unpause (1.81s)

TestErrorSpam/stop (5.38s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-290544 --log_dir /tmp/nospam-290544 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-290544 --log_dir /tmp/nospam-290544 stop: (2.336404913s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-290544 --log_dir /tmp/nospam-290544 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-290544 --log_dir /tmp/nospam-290544 stop: (1.642980571s)
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-290544 --log_dir /tmp/nospam-290544 stop
error_spam_test.go:172: (dbg) Done: out/minikube-linux-amd64 -p nospam-290544 --log_dir /tmp/nospam-290544 stop: (1.396300399s)
--- PASS: TestErrorSpam/stop (5.38s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21341-109162/.minikube/files/etc/test/nested/copy/113288/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (63.28s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-381687 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
E0903 22:36:44.146640  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/addons-389176/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0903 22:37:25.108494  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/addons-389176/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-381687 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (1m3.283408998s)
--- PASS: TestFunctional/serial/StartWithProxy (63.28s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (29.97s)

=== RUN   TestFunctional/serial/SoftStart
I0903 22:37:43.856911  113288 config.go:182] Loaded profile config "functional-381687": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-381687 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-381687 --alsologtostderr -v=8: (29.964722807s)
functional_test.go:678: soft start took 29.965691195s for "functional-381687" cluster.
I0903 22:38:13.822037  113288 config.go:182] Loaded profile config "functional-381687": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestFunctional/serial/SoftStart (29.97s)

TestFunctional/serial/KubeContext (0.05s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

TestFunctional/serial/KubectlGetPods (0.08s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-381687 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

TestFunctional/serial/CacheCmd/cache/add_remote (4.7s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-381687 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-381687 cache add registry.k8s.io/pause:3.1: (1.521829568s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-381687 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-381687 cache add registry.k8s.io/pause:3.3: (1.597806882s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-381687 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-381687 cache add registry.k8s.io/pause:latest: (1.576794042s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (4.70s)

TestFunctional/serial/CacheCmd/cache/add_local (2.57s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-381687 /tmp/TestFunctionalserialCacheCmdcacheadd_local3969237338/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-381687 cache add minikube-local-cache-test:functional-381687
functional_test.go:1104: (dbg) Done: out/minikube-linux-amd64 -p functional-381687 cache add minikube-local-cache-test:functional-381687: (2.258869786s)
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-381687 cache delete minikube-local-cache-test:functional-381687
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-381687
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.57s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

TestFunctional/serial/CacheCmd/cache/list (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.22s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-381687 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.22s)

TestFunctional/serial/CacheCmd/cache/cache_reload (2.15s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-381687 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-381687 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-381687 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (223.28579ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-381687 cache reload
functional_test.go:1173: (dbg) Done: out/minikube-linux-amd64 -p functional-381687 cache reload: (1.439277064s)
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-381687 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.15s)

TestFunctional/serial/CacheCmd/cache/delete (0.1s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.10s)

TestFunctional/serial/MinikubeKubectlCmd (0.1s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-381687 kubectl -- --context functional-381687 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.10s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-381687 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

TestFunctional/serial/ExtraConfig (28.14s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-381687 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0903 22:38:47.033710  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/addons-389176/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-381687 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (28.144034083s)
functional_test.go:776: restart took 28.144189808s for "functional-381687" cluster.
I0903 22:38:52.137832  113288 config.go:182] Loaded profile config "functional-381687": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestFunctional/serial/ExtraConfig (28.14s)
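
The restart above can be reproduced by hand; --extra-config takes component.flag=value pairs, and --wait=all blocks until all verified components report healthy (the profile is assumed to already exist):

	out/minikube-linux-amd64 start -p functional-381687 \
		--extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision \
		--wait=all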

TestFunctional/serial/ComponentHealth (0.06s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-381687 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

TestFunctional/serial/LogsCmd (1.33s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-381687 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-381687 logs: (1.332888481s)
--- PASS: TestFunctional/serial/LogsCmd (1.33s)

TestFunctional/serial/LogsFileCmd (1.34s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-381687 logs --file /tmp/TestFunctionalserialLogsFileCmd3521874453/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-381687 logs --file /tmp/TestFunctionalserialLogsFileCmd3521874453/001/logs.txt: (1.338807925s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.34s)

TestFunctional/serial/InvalidService (4.03s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-381687 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-381687
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-381687: exit status 115 (288.524251ms)

-- stdout --
	┌───────────┬─────────────┬─────────────┬─────────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │             URL             │
	├───────────┼─────────────┼─────────────┼─────────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.39.241:31616 │
	└───────────┴─────────────┴─────────────┴─────────────────────────────┘
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-381687 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.03s)
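
A sketch of the failure mode being checked, assuming testdata/invalidsvc.yaml defines a Service with no running backing pod:

	kubectl --context functional-381687 apply -f testdata/invalidsvc.yaml
	# fails with SVC_UNREACHABLE; the log above shows exit status 115
	out/minikube-linux-amd64 service invalid-svc -p functional-381687 || echo "service unreachable (exit $?)"
	kubectl --context functional-381687 delete -f testdata/invalidsvc.yaml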

TestFunctional/parallel/ConfigCmd (0.41s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-381687 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-381687 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-381687 config get cpus: exit status 14 (87.068591ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-381687 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-381687 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-381687 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-381687 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-381687 config get cpus: exit status 14 (57.730204ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.41s)
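
The set/get/unset cycle above, for reference; note that config get on an unset key exits 14 rather than printing an empty value, which is what the test asserts:

	out/minikube-linux-amd64 -p functional-381687 config set cpus 2
	out/minikube-linux-amd64 -p functional-381687 config get cpus    # prints 2
	out/minikube-linux-amd64 -p functional-381687 config unset cpus
	out/minikube-linux-amd64 -p functional-381687 config get cpus    # exit status 14: key not found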

TestFunctional/parallel/DashboardCmd (11.03s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-381687 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-381687 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 120429: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (11.03s)
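
For interactive use, the same invocation (minus the test harness) prints the proxied dashboard URL instead of opening a browser; the port is optional and assumed free here:

	out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-381687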

TestFunctional/parallel/DryRun (0.38s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-381687 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-381687 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (194.657914ms)

-- stdout --
	* [functional-381687] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21341
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21341-109162/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21341-109162/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0903 22:38:59.554830  120111 out.go:360] Setting OutFile to fd 1 ...
	I0903 22:38:59.555139  120111 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0903 22:38:59.555163  120111 out.go:374] Setting ErrFile to fd 2...
	I0903 22:38:59.555175  120111 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0903 22:38:59.555365  120111 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21341-109162/.minikube/bin
	I0903 22:38:59.555884  120111 out.go:368] Setting JSON to false
	I0903 22:38:59.556816  120111 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":4884,"bootTime":1756934256,"procs":203,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0903 22:38:59.556894  120111 start.go:140] virtualization: kvm guest
	I0903 22:38:59.558994  120111 out.go:179] * [functional-381687] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	I0903 22:38:59.560315  120111 notify.go:220] Checking for updates...
	I0903 22:38:59.560826  120111 out.go:179]   - MINIKUBE_LOCATION=21341
	I0903 22:38:59.562472  120111 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0903 22:38:59.563790  120111 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21341-109162/kubeconfig
	I0903 22:38:59.564978  120111 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21341-109162/.minikube
	I0903 22:38:59.566197  120111 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0903 22:38:59.567467  120111 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0903 22:38:59.569273  120111 config.go:182] Loaded profile config "functional-381687": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0903 22:38:59.569880  120111 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21341-109162/.minikube/bin/docker-machine-driver-kvm2
	I0903 22:38:59.569980  120111 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0903 22:38:59.591607  120111 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38659
	I0903 22:38:59.592123  120111 main.go:141] libmachine: () Calling .GetVersion
	I0903 22:38:59.592710  120111 main.go:141] libmachine: Using API Version  1
	I0903 22:38:59.592736  120111 main.go:141] libmachine: () Calling .SetConfigRaw
	I0903 22:38:59.593188  120111 main.go:141] libmachine: () Calling .GetMachineName
	I0903 22:38:59.593493  120111 main.go:141] libmachine: (functional-381687) Calling .DriverName
	I0903 22:38:59.593797  120111 driver.go:421] Setting default libvirt URI to qemu:///system
	I0903 22:38:59.594226  120111 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21341-109162/.minikube/bin/docker-machine-driver-kvm2
	I0903 22:38:59.594284  120111 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0903 22:38:59.614009  120111 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36895
	I0903 22:38:59.614745  120111 main.go:141] libmachine: () Calling .GetVersion
	I0903 22:38:59.615362  120111 main.go:141] libmachine: Using API Version  1
	I0903 22:38:59.615393  120111 main.go:141] libmachine: () Calling .SetConfigRaw
	I0903 22:38:59.615793  120111 main.go:141] libmachine: () Calling .GetMachineName
	I0903 22:38:59.615978  120111 main.go:141] libmachine: (functional-381687) Calling .DriverName
	I0903 22:38:59.669802  120111 out.go:179] * Using the kvm2 driver based on existing profile
	I0903 22:38:59.671751  120111 start.go:304] selected driver: kvm2
	I0903 22:38:59.671775  120111 start.go:918] validating driver "kvm2" against &{Name:functional-381687 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21147/minikube-v1.36.0-1753487480-21147-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-381687 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.241 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0903 22:38:59.671901  120111 start.go:929] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0903 22:38:59.680369  120111 out.go:203] 
	W0903 22:38:59.681678  120111 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0903 22:38:59.683073  120111 out.go:203] 

** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-381687 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.38s)
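
A sketch of the validation path: with --dry-run the full flag and driver validation runs against the existing profile without mutating the VM, so an undersized --memory fails fast with RSRC_INSUFFICIENT_REQ_MEMORY (exit status 23 in the log above):

	out/minikube-linux-amd64 start -p functional-381687 --dry-run --memory 250MB \
		--alsologtostderr --driver=kvm2 --container-runtime=crio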

TestFunctional/parallel/InternationalLanguage (0.17s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-381687 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-381687 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (173.307583ms)

-- stdout --
	* [functional-381687] minikube v1.36.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21341
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21341-109162/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21341-109162/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0903 22:38:59.387179  120057 out.go:360] Setting OutFile to fd 1 ...
	I0903 22:38:59.387374  120057 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0903 22:38:59.387408  120057 out.go:374] Setting ErrFile to fd 2...
	I0903 22:38:59.387428  120057 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0903 22:38:59.387967  120057 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21341-109162/.minikube/bin
	I0903 22:38:59.388776  120057 out.go:368] Setting JSON to false
	I0903 22:38:59.390167  120057 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":4883,"bootTime":1756934256,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0903 22:38:59.390258  120057 start.go:140] virtualization: kvm guest
	I0903 22:38:59.392303  120057 out.go:179] * [functional-381687] minikube v1.36.0 sur Ubuntu 20.04 (kvm/amd64)
	I0903 22:38:59.393585  120057 out.go:179]   - MINIKUBE_LOCATION=21341
	I0903 22:38:59.393579  120057 notify.go:220] Checking for updates...
	I0903 22:38:59.395464  120057 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0903 22:38:59.396708  120057 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21341-109162/kubeconfig
	I0903 22:38:59.397858  120057 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21341-109162/.minikube
	I0903 22:38:59.399745  120057 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0903 22:38:59.401069  120057 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0903 22:38:59.402753  120057 config.go:182] Loaded profile config "functional-381687": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0903 22:38:59.403353  120057 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21341-109162/.minikube/bin/docker-machine-driver-kvm2
	I0903 22:38:59.403447  120057 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0903 22:38:59.425236  120057 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43765
	I0903 22:38:59.425801  120057 main.go:141] libmachine: () Calling .GetVersion
	I0903 22:38:59.426366  120057 main.go:141] libmachine: Using API Version  1
	I0903 22:38:59.426391  120057 main.go:141] libmachine: () Calling .SetConfigRaw
	I0903 22:38:59.426814  120057 main.go:141] libmachine: () Calling .GetMachineName
	I0903 22:38:59.427025  120057 main.go:141] libmachine: (functional-381687) Calling .DriverName
	I0903 22:38:59.427282  120057 driver.go:421] Setting default libvirt URI to qemu:///system
	I0903 22:38:59.427627  120057 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21341-109162/.minikube/bin/docker-machine-driver-kvm2
	I0903 22:38:59.427711  120057 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0903 22:38:59.443550  120057 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33735
	I0903 22:38:59.444042  120057 main.go:141] libmachine: () Calling .GetVersion
	I0903 22:38:59.444509  120057 main.go:141] libmachine: Using API Version  1
	I0903 22:38:59.444550  120057 main.go:141] libmachine: () Calling .SetConfigRaw
	I0903 22:38:59.445198  120057 main.go:141] libmachine: () Calling .GetMachineName
	I0903 22:38:59.445532  120057 main.go:141] libmachine: (functional-381687) Calling .DriverName
	I0903 22:38:59.483400  120057 out.go:179] * Utilisation du pilote kvm2 basé sur le profil existant
	I0903 22:38:59.484696  120057 start.go:304] selected driver: kvm2
	I0903 22:38:59.484717  120057 start.go:918] validating driver "kvm2" against &{Name:functional-381687 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21147/minikube-v1.36.0-1753487480-21147-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-381687 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.241 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0903 22:38:59.484851  120057 start.go:929] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0903 22:38:59.487175  120057 out.go:203] 
	W0903 22:38:59.488285  120057 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0903 22:38:59.489293  120057 out.go:203] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.17s)

TestFunctional/parallel/StatusCmd (1.11s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-381687 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-381687 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-381687 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.11s)
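
The -f flag takes a Go template over the status struct, so individual fields can be scripted against; the format string below is copied from the run above (the "kublet:" label is a literal in the test's format string, not a template key):

	out/minikube-linux-amd64 -p functional-381687 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
	out/minikube-linux-amd64 -p functional-381687 status -o json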

TestFunctional/parallel/ServiceCmdConnect (12.56s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-381687 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-381687 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-jx8wq" [013c0240-1be1-4cbd-a462-a54f5e0dd1d9] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-connect-7d85dfc575-jx8wq" [013c0240-1be1-4cbd-a462-a54f5e0dd1d9] Running
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 12.005406617s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-amd64 -p functional-381687 service hello-node-connect --url
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.39.241:30952
functional_test.go:1680: http://192.168.39.241:30952: success! body:
Request served by hello-node-connect-7d85dfc575-jx8wq

HTTP/1.1 GET /

Host: 192.168.39.241:30952
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctional/parallel/ServiceCmdConnect (12.56s)
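
A minimal sketch of the expose-and-connect flow above, with curl (not part of the test harness) standing in for the test's HTTP client:

	kubectl --context functional-381687 create deployment hello-node-connect --image kicbase/echo-server
	kubectl --context functional-381687 expose deployment hello-node-connect --type=NodePort --port=8080
	# wait for the pod to be Running, then fetch the NodePort URL and hit it
	URL=$(out/minikube-linux-amd64 -p functional-381687 service hello-node-connect --url)
	curl -s "$URL"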

TestFunctional/parallel/AddonsCmd (0.14s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-381687 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-381687 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.14s)

TestFunctional/parallel/PersistentVolumeClaim (42.47s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [a1d313d9-7124-4a2b-8211-0fbbd84c0390] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.003624072s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-381687 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-381687 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-381687 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-381687 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [abb85c24-84b0-431a-a550-47481219bd36] Pending
helpers_test.go:352: "sp-pod" [abb85c24-84b0-431a-a550-47481219bd36] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [abb85c24-84b0-431a-a550-47481219bd36] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 16.003853525s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-381687 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-381687 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:112: (dbg) Done: kubectl --context functional-381687 delete -f testdata/storage-provisioner/pod.yaml: (2.292086188s)
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-381687 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [64c4876c-88c7-45cd-98eb-bc5d400d3fec] Pending
helpers_test.go:352: "sp-pod" [64c4876c-88c7-45cd-98eb-bc5d400d3fec] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [64c4876c-88c7-45cd-98eb-bc5d400d3fec] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 17.003623083s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-381687 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (42.47s)
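
The persistence check above in shell form: data written through the PVC must survive deletion and re-creation of the consuming pod (manifests are the testdata ones referenced in the log; waits for sp-pod to reach Running are elided between steps):

	kubectl --context functional-381687 apply -f testdata/storage-provisioner/pvc.yaml
	kubectl --context functional-381687 apply -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-381687 exec sp-pod -- touch /tmp/mount/foo
	kubectl --context functional-381687 delete -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-381687 apply -f testdata/storage-provisioner/pod.yaml
	# foo should still be listed: the volume outlived the first pod
	kubectl --context functional-381687 exec sp-pod -- ls /tmp/mount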

TestFunctional/parallel/SSHCmd (0.43s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-381687 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-381687 ssh "cat /etc/hostname"
2025/09/03 22:39:10 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/SSHCmd (0.43s)

TestFunctional/parallel/CpCmd (1.53s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-381687 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-381687 ssh -n functional-381687 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-381687 cp functional-381687:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1305141005/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-381687 ssh -n functional-381687 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-381687 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-381687 ssh -n functional-381687 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.53s)
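
minikube cp copies in both directions, host-to-node and node-to-host; the steps above condensed, with an arbitrary host destination assumed for the copy back:

	out/minikube-linux-amd64 -p functional-381687 cp testdata/cp-test.txt /home/docker/cp-test.txt
	out/minikube-linux-amd64 -p functional-381687 ssh -n functional-381687 "sudo cat /home/docker/cp-test.txt"
	out/minikube-linux-amd64 -p functional-381687 cp functional-381687:/home/docker/cp-test.txt /tmp/cp-test.txt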

TestFunctional/parallel/MySQL (30.31s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-381687 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:352: "mysql-5bb876957f-t9z4k" [0e2361ba-c03f-4f33-a8f8-4e523d0b3eb8] Pending
helpers_test.go:352: "mysql-5bb876957f-t9z4k" [0e2361ba-c03f-4f33-a8f8-4e523d0b3eb8] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:352: "mysql-5bb876957f-t9z4k" [0e2361ba-c03f-4f33-a8f8-4e523d0b3eb8] Running
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 30.005031704s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-381687 exec mysql-5bb876957f-t9z4k -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (30.31s)
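
The readiness-then-query pattern above, sketched with kubectl wait in place of the harness's polling (the root password "password" is implied by the exec in the log; the manifest is assumed to set it):

	kubectl --context functional-381687 replace --force -f testdata/mysql.yaml
	kubectl --context functional-381687 wait --for=condition=ready pod -l app=mysql --timeout=10m
	POD=$(kubectl --context functional-381687 get pods -l app=mysql -o jsonpath='{.items[0].metadata.name}')
	kubectl --context functional-381687 exec "$POD" -- mysql -ppassword -e "show databases;"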

TestFunctional/parallel/FileSync (0.22s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/113288/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-381687 ssh "sudo cat /etc/test/nested/copy/113288/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.22s)

TestFunctional/parallel/CertSync (1.41s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/113288.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-381687 ssh "sudo cat /etc/ssl/certs/113288.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/113288.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-381687 ssh "sudo cat /usr/share/ca-certificates/113288.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-381687 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/1132882.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-381687 ssh "sudo cat /etc/ssl/certs/1132882.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/1132882.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-381687 ssh "sudo cat /usr/share/ca-certificates/1132882.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-381687 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.41s)
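
The test checks each cert in two path forms plus a hash-named link; the .0 names (51391683.0, 3ec20f2e.0) follow OpenSSL's subject-hash convention for /etc/ssl/certs. A spot check over ssh, file names taken from the run:

	out/minikube-linux-amd64 -p functional-381687 ssh "sudo cat /etc/ssl/certs/113288.pem"
	out/minikube-linux-amd64 -p functional-381687 ssh "sudo cat /etc/ssl/certs/51391683.0"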

TestFunctional/parallel/NodeLabels (0.06s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-381687 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)
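
The label dump above uses kubectl's go-template output; a jsonpath variant (illustrative, not from the test) reads the same field:

	kubectl --context functional-381687 get nodes -o jsonpath='{.items[0].metadata.labels}'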

TestFunctional/parallel/NonActiveRuntimeDisabled (0.47s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-381687 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-381687 ssh "sudo systemctl is-active docker": exit status 1 (243.407545ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-381687 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-381687 ssh "sudo systemctl is-active containerd": exit status 1 (226.826835ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.47s)
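
systemctl is-active reports the unit state on stdout and in the exit status (0 for active, non-zero otherwise; the "status 3" above is the inactive case), which is why the ssh wrapper exits 1 for docker and containerd while crio, the active runtime in this profile, succeeds:

	out/minikube-linux-amd64 -p functional-381687 ssh "sudo systemctl is-active crio"      # expect: active, exit 0
	out/minikube-linux-amd64 -p functional-381687 ssh "sudo systemctl is-active docker"    # expect: inactive, non-zero exit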

TestFunctional/parallel/License (0.81s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.81s)

TestFunctional/parallel/ServiceCmd/DeployApp (9.24s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-381687 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-381687 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-nv58n" [9e6936b2-70c1-40a9-80de-4ad2a991a1e0] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-75c85bcc94-nv58n" [9e6936b2-70c1-40a9-80de-4ad2a991a1e0] Running
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 9.006482468s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (9.24s)

TestFunctional/parallel/Version/short (0.06s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-381687 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

TestFunctional/parallel/Version/components (0.81s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-381687 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.81s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.5s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-381687 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-381687 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.0
registry.k8s.io/kube-proxy:v1.34.0
registry.k8s.io/kube-controller-manager:v1.34.0
registry.k8s.io/kube-apiserver:v1.34.0
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
localhost/minikube-local-cache-test:functional-381687
localhost/kicbase/echo-server:functional-381687
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/kindest/kindnetd:v20250512-df8de77b
docker.io/kicbase/echo-server:latest
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-381687 image ls --format short --alsologtostderr:
I0903 22:39:25.407926  121941 out.go:360] Setting OutFile to fd 1 ...
I0903 22:39:25.408217  121941 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0903 22:39:25.408231  121941 out.go:374] Setting ErrFile to fd 2...
I0903 22:39:25.408237  121941 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0903 22:39:25.408522  121941 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21341-109162/.minikube/bin
I0903 22:39:25.409327  121941 config.go:182] Loaded profile config "functional-381687": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0903 22:39:25.409499  121941 config.go:182] Loaded profile config "functional-381687": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0903 22:39:25.410073  121941 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21341-109162/.minikube/bin/docker-machine-driver-kvm2
I0903 22:39:25.410162  121941 main.go:141] libmachine: Launching plugin server for driver kvm2
I0903 22:39:25.425475  121941 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46871
I0903 22:39:25.425958  121941 main.go:141] libmachine: () Calling .GetVersion
I0903 22:39:25.426466  121941 main.go:141] libmachine: Using API Version  1
I0903 22:39:25.426493  121941 main.go:141] libmachine: () Calling .SetConfigRaw
I0903 22:39:25.426854  121941 main.go:141] libmachine: () Calling .GetMachineName
I0903 22:39:25.427082  121941 main.go:141] libmachine: (functional-381687) Calling .GetState
I0903 22:39:25.428902  121941 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21341-109162/.minikube/bin/docker-machine-driver-kvm2
I0903 22:39:25.428951  121941 main.go:141] libmachine: Launching plugin server for driver kvm2
I0903 22:39:25.443922  121941 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34609
I0903 22:39:25.444444  121941 main.go:141] libmachine: () Calling .GetVersion
I0903 22:39:25.445042  121941 main.go:141] libmachine: Using API Version  1
I0903 22:39:25.445074  121941 main.go:141] libmachine: () Calling .SetConfigRaw
I0903 22:39:25.445439  121941 main.go:141] libmachine: () Calling .GetMachineName
I0903 22:39:25.445628  121941 main.go:141] libmachine: (functional-381687) Calling .DriverName
I0903 22:39:25.445813  121941 ssh_runner.go:195] Run: systemctl --version
I0903 22:39:25.445844  121941 main.go:141] libmachine: (functional-381687) Calling .GetSSHHostname
I0903 22:39:25.448665  121941 main.go:141] libmachine: (functional-381687) DBG | domain functional-381687 has defined MAC address 52:54:00:8a:cc:f3 in network mk-functional-381687
I0903 22:39:25.449103  121941 main.go:141] libmachine: (functional-381687) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:cc:f3", ip: ""} in network mk-functional-381687: {Iface:virbr1 ExpiryTime:2025-09-03 23:36:55 +0000 UTC Type:0 Mac:52:54:00:8a:cc:f3 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:functional-381687 Clientid:01:52:54:00:8a:cc:f3}
I0903 22:39:25.449133  121941 main.go:141] libmachine: (functional-381687) DBG | domain functional-381687 has defined IP address 192.168.39.241 and MAC address 52:54:00:8a:cc:f3 in network mk-functional-381687
I0903 22:39:25.449277  121941 main.go:141] libmachine: (functional-381687) Calling .GetSSHPort
I0903 22:39:25.449507  121941 main.go:141] libmachine: (functional-381687) Calling .GetSSHKeyPath
I0903 22:39:25.449688  121941 main.go:141] libmachine: (functional-381687) Calling .GetSSHUsername
I0903 22:39:25.449836  121941 sshutil.go:53] new ssh client: &{IP:192.168.39.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21341-109162/.minikube/machines/functional-381687/id_rsa Username:docker}
I0903 22:39:25.555055  121941 ssh_runner.go:195] Run: sudo crictl images --output json
I0903 22:39:25.848456  121941 main.go:141] libmachine: Making call to close driver server
I0903 22:39:25.848468  121941 main.go:141] libmachine: (functional-381687) Calling .Close
I0903 22:39:25.848786  121941 main.go:141] libmachine: Successfully made call to close driver server
I0903 22:39:25.848804  121941 main.go:141] libmachine: Making call to close connection to plugin binary
I0903 22:39:25.848812  121941 main.go:141] libmachine: Making call to close driver server
I0903 22:39:25.848819  121941 main.go:141] libmachine: (functional-381687) Calling .Close
I0903 22:39:25.849148  121941 main.go:141] libmachine: Successfully made call to close driver server
I0903 22:39:25.849180  121941 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.50s)
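
image ls renders the same image set in several formats; short (above) prints one ref per line for easy grepping, while table (exercised by the next test) adds image IDs and sizes:

	out/minikube-linux-amd64 -p functional-381687 image ls --format short
	out/minikube-linux-amd64 -p functional-381687 image ls --format table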

TestFunctional/parallel/ImageCommands/ImageListTable (0.3s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-381687 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-381687 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ registry.k8s.io/kube-apiserver          │ v1.34.0            │ 90550c43ad2bc │ 89.1MB │
│ registry.k8s.io/kube-proxy              │ v1.34.0            │ df0860106674d │ 73.1MB │
│ registry.k8s.io/pause                   │ 3.1                │ da86e6ba6ca19 │ 747kB  │
│ docker.io/kicbase/echo-server           │ latest             │ 9056ab77afb8e │ 4.95MB │
│ localhost/kicbase/echo-server           │ functional-381687  │ 9056ab77afb8e │ 4.95MB │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 56cc512116c8f │ 4.63MB │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ 6e38f40d628db │ 31.5MB │
│ registry.k8s.io/kube-scheduler          │ v1.34.0            │ 46169d968e920 │ 53.8MB │
│ registry.k8s.io/pause                   │ 3.10.1             │ cd073f4c5f6a8 │ 742kB  │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ 409467f978b4a │ 109MB  │
│ registry.k8s.io/etcd                    │ 3.6.4-0            │ 5f1f5298c888d │ 196MB  │
│ registry.k8s.io/pause                   │ 3.3                │ 0184c1613d929 │ 686kB  │
│ registry.k8s.io/pause                   │ latest             │ 350b164e7ae1d │ 247kB  │
│ localhost/minikube-local-cache-test     │ functional-381687  │ 30f6987407653 │ 3.33kB │
│ registry.k8s.io/kube-controller-manager │ v1.34.0            │ a0af72f2ec6d6 │ 76MB   │
│ docker.io/library/nginx                 │ latest             │ ad5708199ec7d │ 197MB  │
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 52546a367cc9e │ 76.1MB │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-381687 image ls --format table --alsologtostderr:
I0903 22:39:29.235082  122433 out.go:360] Setting OutFile to fd 1 ...
I0903 22:39:29.235858  122433 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0903 22:39:29.235914  122433 out.go:374] Setting ErrFile to fd 2...
I0903 22:39:29.235932  122433 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0903 22:39:29.236460  122433 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21341-109162/.minikube/bin
I0903 22:39:29.237593  122433 config.go:182] Loaded profile config "functional-381687": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0903 22:39:29.237718  122433 config.go:182] Loaded profile config "functional-381687": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0903 22:39:29.238057  122433 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21341-109162/.minikube/bin/docker-machine-driver-kvm2
I0903 22:39:29.238107  122433 main.go:141] libmachine: Launching plugin server for driver kvm2
I0903 22:39:29.254311  122433 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33261
I0903 22:39:29.254852  122433 main.go:141] libmachine: () Calling .GetVersion
I0903 22:39:29.255513  122433 main.go:141] libmachine: Using API Version  1
I0903 22:39:29.255545  122433 main.go:141] libmachine: () Calling .SetConfigRaw
I0903 22:39:29.255978  122433 main.go:141] libmachine: () Calling .GetMachineName
I0903 22:39:29.256213  122433 main.go:141] libmachine: (functional-381687) Calling .GetState
I0903 22:39:29.258107  122433 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21341-109162/.minikube/bin/docker-machine-driver-kvm2
I0903 22:39:29.258154  122433 main.go:141] libmachine: Launching plugin server for driver kvm2
I0903 22:39:29.273173  122433 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41129
I0903 22:39:29.273722  122433 main.go:141] libmachine: () Calling .GetVersion
I0903 22:39:29.274216  122433 main.go:141] libmachine: Using API Version  1
I0903 22:39:29.274245  122433 main.go:141] libmachine: () Calling .SetConfigRaw
I0903 22:39:29.274670  122433 main.go:141] libmachine: () Calling .GetMachineName
I0903 22:39:29.274900  122433 main.go:141] libmachine: (functional-381687) Calling .DriverName
I0903 22:39:29.275123  122433 ssh_runner.go:195] Run: systemctl --version
I0903 22:39:29.275160  122433 main.go:141] libmachine: (functional-381687) Calling .GetSSHHostname
I0903 22:39:29.278036  122433 main.go:141] libmachine: (functional-381687) DBG | domain functional-381687 has defined MAC address 52:54:00:8a:cc:f3 in network mk-functional-381687
I0903 22:39:29.278441  122433 main.go:141] libmachine: (functional-381687) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:cc:f3", ip: ""} in network mk-functional-381687: {Iface:virbr1 ExpiryTime:2025-09-03 23:36:55 +0000 UTC Type:0 Mac:52:54:00:8a:cc:f3 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:functional-381687 Clientid:01:52:54:00:8a:cc:f3}
I0903 22:39:29.278472  122433 main.go:141] libmachine: (functional-381687) DBG | domain functional-381687 has defined IP address 192.168.39.241 and MAC address 52:54:00:8a:cc:f3 in network mk-functional-381687
I0903 22:39:29.278611  122433 main.go:141] libmachine: (functional-381687) Calling .GetSSHPort
I0903 22:39:29.278788  122433 main.go:141] libmachine: (functional-381687) Calling .GetSSHKeyPath
I0903 22:39:29.278956  122433 main.go:141] libmachine: (functional-381687) Calling .GetSSHUsername
I0903 22:39:29.279130  122433 sshutil.go:53] new ssh client: &{IP:192.168.39.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21341-109162/.minikube/machines/functional-381687/id_rsa Username:docker}
I0903 22:39:29.372442  122433 ssh_runner.go:195] Run: sudo crictl images --output json
I0903 22:39:29.431893  122433 main.go:141] libmachine: Making call to close driver server
I0903 22:39:29.431922  122433 main.go:141] libmachine: (functional-381687) Calling .Close
I0903 22:39:29.432226  122433 main.go:141] libmachine: Successfully made call to close driver server
I0903 22:39:29.432253  122433 main.go:141] libmachine: Making call to close connection to plugin binary
I0903 22:39:29.432269  122433 main.go:141] libmachine: Making call to close driver server
I0903 22:39:29.432277  122433 main.go:141] libmachine: (functional-381687) Calling .Close
I0903 22:39:29.432277  122433 main.go:141] libmachine: (functional-381687) DBG | Closing plugin on server side
I0903 22:39:29.432556  122433 main.go:141] libmachine: (functional-381687) DBG | Closing plugin on server side
I0903 22:39:29.432566  122433 main.go:141] libmachine: Successfully made call to close driver server
I0903 22:39:29.432586  122433 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.30s)
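
As the stderr above shows, the table is assembled client-side: minikube SSHes into the VM, runs "sudo crictl images --output json", and renders the result. A manual equivalent over the same profile (a sketch, not part of the test itself):

    # query the CRI-O image store directly inside the VM
    out/minikube-linux-amd64 -p functional-381687 ssh -- sudo crictl images --output json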

TestFunctional/parallel/ImageCommands/ImageListJson (0.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-381687 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-381687 image ls --format json --alsologtostderr:
[{"id":"30f69874076538b2feac46aa440b18459e9c83a6cebf064fc1a61744c09e7dec","repoDigests":["localhost/minikube-local-cache-test@sha256:040ef3a22e67ee17f98f61f971178851ef96a8f734e324a4327e29251c399dba"],"repoTags":["localhost/minikube-local-cache-test:functional-381687"],"size":"3326"},{"id":"5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115","repoDigests":["registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f","registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"195976448"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a08
6b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90","repoDigests":["registry.k8s.io/kube-apiserver@sha256:495d3253a47a9a64a62041d518678c8b101fb628488e729d9f52ddff7cf82a86","registry.k8s.io/kube-apiserver@sha256:fe86fe2f64021df8efa1a939a290bc21c8c128c66fc00ebbb6b5dea4c7a06812"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.0"],"size":"89050097"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","docker.io/kicbase/echo
-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86","docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf","localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","localhost/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86","localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["docker.io/kicbase/echo-server:latest","localhost/kicbase/echo-server:functional-381687"],"size":"4945146"},{"id":"409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"109379124"},{"id":"115053965e86b2df4d78af78d79
51b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-mi
nikube/storage-provisioner:v5"],"size":"31470524"},{"id":"46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc","repoDigests":["registry.k8s.io/kube-scheduler@sha256:31b77e40d737b6d3e3b19b4afd681c9362aef06353075895452fc9a41fe34140","registry.k8s.io/kube-scheduler@sha256:8fbe6d18415c8af9b31e177f6e444985f3a87349e083fe6eadd36753dddb17ff"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.0"],"size":"53844823"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"ad5708199ec7d169c6837fe46e1646603d0f7d0a0f54d3cd8d07bc1c818d0224","repoDigests":["docker.io/library/nginx@sha256:33e0bbc7ca9ecf108140af6288c7c9d1ecc77548cbfd3952fd8466a75edefe57","docker.io/library/nginx@sha256:f15190cd0aed34df2541e6a569d349858dd83fe2a519d7c0ec023133b6d3c4f7"],"repoTags":["docker.io/library/nginx:latest"],"size":"196544386"},{"id":"
52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"76103547"},{"id":"a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:82ea603ed3cce63f9f870f22299741e0011318391cf722dd924a1d5a9f8ce6f6","registry.k8s.io/kube-controller-manager@sha256:f8ba6c082136e2c85ce71628c59c6574ca4b67f162911cb200c0a51a5b9bff81"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.0"],"size":"76004183"},{"id":"df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce","repoDigests":["registry.k8s.io/kube-proxy@sha256:364da8a25c742d7a35e9635cb8cf42c1faf5b367760d0f9f9a75bdd1f9d52067","registry.k8s.io/kube-proxy@sha256:5f71731a5eadcf74f3997dfc159bf5ca3
6e48c3387c19082fc21871e0dbb19af"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.0"],"size":"73138071"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"742092"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-381687 image ls --format json --alsologtostderr:
I0903 22:39:28.982078  122409 out.go:360] Setting OutFile to fd 1 ...
I0903 22:39:28.982324  122409 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0903 22:39:28.982332  122409 out.go:374] Setting ErrFile to fd 2...
I0903 22:39:28.982337  122409 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0903 22:39:28.982506  122409 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21341-109162/.minikube/bin
I0903 22:39:28.983034  122409 config.go:182] Loaded profile config "functional-381687": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0903 22:39:28.983132  122409 config.go:182] Loaded profile config "functional-381687": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0903 22:39:28.985205  122409 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21341-109162/.minikube/bin/docker-machine-driver-kvm2
I0903 22:39:28.985277  122409 main.go:141] libmachine: Launching plugin server for driver kvm2
I0903 22:39:29.001531  122409 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33791
I0903 22:39:29.002211  122409 main.go:141] libmachine: () Calling .GetVersion
I0903 22:39:29.002771  122409 main.go:141] libmachine: Using API Version  1
I0903 22:39:29.002796  122409 main.go:141] libmachine: () Calling .SetConfigRaw
I0903 22:39:29.003238  122409 main.go:141] libmachine: () Calling .GetMachineName
I0903 22:39:29.003428  122409 main.go:141] libmachine: (functional-381687) Calling .GetState
I0903 22:39:29.005275  122409 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21341-109162/.minikube/bin/docker-machine-driver-kvm2
I0903 22:39:29.005333  122409 main.go:141] libmachine: Launching plugin server for driver kvm2
I0903 22:39:29.020279  122409 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44389
I0903 22:39:29.020773  122409 main.go:141] libmachine: () Calling .GetVersion
I0903 22:39:29.021431  122409 main.go:141] libmachine: Using API Version  1
I0903 22:39:29.021485  122409 main.go:141] libmachine: () Calling .SetConfigRaw
I0903 22:39:29.021819  122409 main.go:141] libmachine: () Calling .GetMachineName
I0903 22:39:29.022013  122409 main.go:141] libmachine: (functional-381687) Calling .DriverName
I0903 22:39:29.022221  122409 ssh_runner.go:195] Run: systemctl --version
I0903 22:39:29.022247  122409 main.go:141] libmachine: (functional-381687) Calling .GetSSHHostname
I0903 22:39:29.024573  122409 main.go:141] libmachine: (functional-381687) DBG | domain functional-381687 has defined MAC address 52:54:00:8a:cc:f3 in network mk-functional-381687
I0903 22:39:29.024948  122409 main.go:141] libmachine: (functional-381687) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:cc:f3", ip: ""} in network mk-functional-381687: {Iface:virbr1 ExpiryTime:2025-09-03 23:36:55 +0000 UTC Type:0 Mac:52:54:00:8a:cc:f3 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:functional-381687 Clientid:01:52:54:00:8a:cc:f3}
I0903 22:39:29.024969  122409 main.go:141] libmachine: (functional-381687) DBG | domain functional-381687 has defined IP address 192.168.39.241 and MAC address 52:54:00:8a:cc:f3 in network mk-functional-381687
I0903 22:39:29.025138  122409 main.go:141] libmachine: (functional-381687) Calling .GetSSHPort
I0903 22:39:29.025318  122409 main.go:141] libmachine: (functional-381687) Calling .GetSSHKeyPath
I0903 22:39:29.025505  122409 main.go:141] libmachine: (functional-381687) Calling .GetSSHUsername
I0903 22:39:29.025645  122409 sshutil.go:53] new ssh client: &{IP:192.168.39.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21341-109162/.minikube/machines/functional-381687/id_rsa Username:docker}
I0903 22:39:29.121092  122409 ssh_runner.go:195] Run: sudo crictl images --output json
I0903 22:39:29.176911  122409 main.go:141] libmachine: Making call to close driver server
I0903 22:39:29.176931  122409 main.go:141] libmachine: (functional-381687) Calling .Close
I0903 22:39:29.177293  122409 main.go:141] libmachine: (functional-381687) DBG | Closing plugin on server side
I0903 22:39:29.177341  122409 main.go:141] libmachine: Successfully made call to close driver server
I0903 22:39:29.177357  122409 main.go:141] libmachine: Making call to close connection to plugin binary
I0903 22:39:29.177373  122409 main.go:141] libmachine: Making call to close driver server
I0903 22:39:29.177392  122409 main.go:141] libmachine: (functional-381687) Calling .Close
I0903 22:39:29.177633  122409 main.go:141] libmachine: Successfully made call to close driver server
I0903 22:39:29.177645  122409 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.24s)
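
Because "image ls --format json" emits a plain JSON array of {id, repoDigests, repoTags, size} objects, it pipes cleanly into other tools; for example, listing only the tagged references (jq is assumed to be installed and is not part of the test):

    out/minikube-linux-amd64 -p functional-381687 image ls --format json | jq -r '.[].repoTags[]'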

TestFunctional/parallel/ImageCommands/ImageListYaml (0.33s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-381687 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-381687 image ls --format yaml --alsologtostderr:
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: 52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "76103547"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41
repoTags:
- registry.k8s.io/pause:3.10.1
size: "742092"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:495d3253a47a9a64a62041d518678c8b101fb628488e729d9f52ddff7cf82a86
- registry.k8s.io/kube-apiserver@sha256:fe86fe2f64021df8efa1a939a290bc21c8c128c66fc00ebbb6b5dea4c7a06812
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.0
size: "89050097"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
- localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- localhost/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- docker.io/kicbase/echo-server:latest
- localhost/kicbase/echo-server:functional-381687
size: "4945146"
- id: ad5708199ec7d169c6837fe46e1646603d0f7d0a0f54d3cd8d07bc1c818d0224
repoDigests:
- docker.io/library/nginx@sha256:33e0bbc7ca9ecf108140af6288c7c9d1ecc77548cbfd3952fd8466a75edefe57
- docker.io/library/nginx@sha256:f15190cd0aed34df2541e6a569d349858dd83fe2a519d7c0ec023133b6d3c4f7
repoTags:
- docker.io/library/nginx:latest
size: "196544386"
- id: 5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115
repoDigests:
- registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f
- registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "195976448"
- id: 46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:31b77e40d737b6d3e3b19b4afd681c9362aef06353075895452fc9a41fe34140
- registry.k8s.io/kube-scheduler@sha256:8fbe6d18415c8af9b31e177f6e444985f3a87349e083fe6eadd36753dddb17ff
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.0
size: "53844823"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "109379124"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 30f69874076538b2feac46aa440b18459e9c83a6cebf064fc1a61744c09e7dec
repoDigests:
- localhost/minikube-local-cache-test@sha256:040ef3a22e67ee17f98f61f971178851ef96a8f734e324a4327e29251c399dba
repoTags:
- localhost/minikube-local-cache-test:functional-381687
size: "3326"
- id: a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:82ea603ed3cce63f9f870f22299741e0011318391cf722dd924a1d5a9f8ce6f6
- registry.k8s.io/kube-controller-manager@sha256:f8ba6c082136e2c85ce71628c59c6574ca4b67f162911cb200c0a51a5b9bff81
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.0
size: "76004183"
- id: df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce
repoDigests:
- registry.k8s.io/kube-proxy@sha256:364da8a25c742d7a35e9635cb8cf42c1faf5b367760d0f9f9a75bdd1f9d52067
- registry.k8s.io/kube-proxy@sha256:5f71731a5eadcf74f3997dfc159bf5ca36e48c3387c19082fc21871e0dbb19af
repoTags:
- registry.k8s.io/kube-proxy:v1.34.0
size: "73138071"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-381687 image ls --format yaml --alsologtostderr:
I0903 22:39:25.902225  122016 out.go:360] Setting OutFile to fd 1 ...
I0903 22:39:25.902499  122016 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0903 22:39:25.902510  122016 out.go:374] Setting ErrFile to fd 2...
I0903 22:39:25.902514  122016 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0903 22:39:25.902710  122016 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21341-109162/.minikube/bin
I0903 22:39:25.903249  122016 config.go:182] Loaded profile config "functional-381687": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0903 22:39:25.903352  122016 config.go:182] Loaded profile config "functional-381687": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0903 22:39:25.903691  122016 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21341-109162/.minikube/bin/docker-machine-driver-kvm2
I0903 22:39:25.903758  122016 main.go:141] libmachine: Launching plugin server for driver kvm2
I0903 22:39:25.920038  122016 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41637
I0903 22:39:25.920529  122016 main.go:141] libmachine: () Calling .GetVersion
I0903 22:39:25.921137  122016 main.go:141] libmachine: Using API Version  1
I0903 22:39:25.921180  122016 main.go:141] libmachine: () Calling .SetConfigRaw
I0903 22:39:25.921574  122016 main.go:141] libmachine: () Calling .GetMachineName
I0903 22:39:25.921848  122016 main.go:141] libmachine: (functional-381687) Calling .GetState
I0903 22:39:25.923869  122016 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21341-109162/.minikube/bin/docker-machine-driver-kvm2
I0903 22:39:25.923922  122016 main.go:141] libmachine: Launching plugin server for driver kvm2
I0903 22:39:25.938950  122016 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35611
I0903 22:39:25.939456  122016 main.go:141] libmachine: () Calling .GetVersion
I0903 22:39:25.939967  122016 main.go:141] libmachine: Using API Version  1
I0903 22:39:25.939996  122016 main.go:141] libmachine: () Calling .SetConfigRaw
I0903 22:39:25.940301  122016 main.go:141] libmachine: () Calling .GetMachineName
I0903 22:39:25.940500  122016 main.go:141] libmachine: (functional-381687) Calling .DriverName
I0903 22:39:25.940689  122016 ssh_runner.go:195] Run: systemctl --version
I0903 22:39:25.940721  122016 main.go:141] libmachine: (functional-381687) Calling .GetSSHHostname
I0903 22:39:25.943701  122016 main.go:141] libmachine: (functional-381687) DBG | domain functional-381687 has defined MAC address 52:54:00:8a:cc:f3 in network mk-functional-381687
I0903 22:39:25.944101  122016 main.go:141] libmachine: (functional-381687) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:cc:f3", ip: ""} in network mk-functional-381687: {Iface:virbr1 ExpiryTime:2025-09-03 23:36:55 +0000 UTC Type:0 Mac:52:54:00:8a:cc:f3 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:functional-381687 Clientid:01:52:54:00:8a:cc:f3}
I0903 22:39:25.944127  122016 main.go:141] libmachine: (functional-381687) DBG | domain functional-381687 has defined IP address 192.168.39.241 and MAC address 52:54:00:8a:cc:f3 in network mk-functional-381687
I0903 22:39:25.944254  122016 main.go:141] libmachine: (functional-381687) Calling .GetSSHPort
I0903 22:39:25.944407  122016 main.go:141] libmachine: (functional-381687) Calling .GetSSHKeyPath
I0903 22:39:25.944566  122016 main.go:141] libmachine: (functional-381687) Calling .GetSSHUsername
I0903 22:39:25.944694  122016 sshutil.go:53] new ssh client: &{IP:192.168.39.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21341-109162/.minikube/machines/functional-381687/id_rsa Username:docker}
I0903 22:39:26.045197  122016 ssh_runner.go:195] Run: sudo crictl images --output json
I0903 22:39:26.171094  122016 main.go:141] libmachine: Making call to close driver server
I0903 22:39:26.171112  122016 main.go:141] libmachine: (functional-381687) Calling .Close
I0903 22:39:26.171503  122016 main.go:141] libmachine: (functional-381687) DBG | Closing plugin on server side
I0903 22:39:26.171565  122016 main.go:141] libmachine: Successfully made call to close driver server
I0903 22:39:26.171573  122016 main.go:141] libmachine: Making call to close connection to plugin binary
I0903 22:39:26.171581  122016 main.go:141] libmachine: Making call to close driver server
I0903 22:39:26.171597  122016 main.go:141] libmachine: (functional-381687) Calling .Close
I0903 22:39:26.171857  122016 main.go:141] libmachine: (functional-381687) DBG | Closing plugin on server side
I0903 22:39:26.171885  122016 main.go:141] libmachine: Successfully made call to close driver server
I0903 22:39:26.171896  122016 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.33s)

TestFunctional/parallel/ImageCommands/ImageBuild (9.4s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-381687 ssh pgrep buildkitd
I0903 22:39:26.319861  113288 detect.go:223] nested VM detected
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-381687 ssh pgrep buildkitd: exit status 1 (272.694385ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-381687 image build -t localhost/my-image:functional-381687 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-381687 image build -t localhost/my-image:functional-381687 testdata/build --alsologtostderr: (8.704872628s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-amd64 -p functional-381687 image build -t localhost/my-image:functional-381687 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 3e7ce2b8aa6
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-381687
--> 2617c1615b3
Successfully tagged localhost/my-image:functional-381687
2617c1615b38f885cfe06e417f6dac46fe67bc6d7b82537ede993d5b59fe632a
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-381687 image build -t localhost/my-image:functional-381687 testdata/build --alsologtostderr:
I0903 22:39:26.499451  122121 out.go:360] Setting OutFile to fd 1 ...
I0903 22:39:26.499757  122121 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0903 22:39:26.499767  122121 out.go:374] Setting ErrFile to fd 2...
I0903 22:39:26.499771  122121 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0903 22:39:26.499981  122121 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21341-109162/.minikube/bin
I0903 22:39:26.500565  122121 config.go:182] Loaded profile config "functional-381687": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0903 22:39:26.501752  122121 config.go:182] Loaded profile config "functional-381687": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0903 22:39:26.502585  122121 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21341-109162/.minikube/bin/docker-machine-driver-kvm2
I0903 22:39:26.502634  122121 main.go:141] libmachine: Launching plugin server for driver kvm2
I0903 22:39:26.518335  122121 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35893
I0903 22:39:26.518910  122121 main.go:141] libmachine: () Calling .GetVersion
I0903 22:39:26.519515  122121 main.go:141] libmachine: Using API Version  1
I0903 22:39:26.519540  122121 main.go:141] libmachine: () Calling .SetConfigRaw
I0903 22:39:26.519886  122121 main.go:141] libmachine: () Calling .GetMachineName
I0903 22:39:26.520068  122121 main.go:141] libmachine: (functional-381687) Calling .GetState
I0903 22:39:26.521677  122121 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21341-109162/.minikube/bin/docker-machine-driver-kvm2
I0903 22:39:26.521719  122121 main.go:141] libmachine: Launching plugin server for driver kvm2
I0903 22:39:26.536640  122121 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33301
I0903 22:39:26.537040  122121 main.go:141] libmachine: () Calling .GetVersion
I0903 22:39:26.537490  122121 main.go:141] libmachine: Using API Version  1
I0903 22:39:26.537516  122121 main.go:141] libmachine: () Calling .SetConfigRaw
I0903 22:39:26.537923  122121 main.go:141] libmachine: () Calling .GetMachineName
I0903 22:39:26.538123  122121 main.go:141] libmachine: (functional-381687) Calling .DriverName
I0903 22:39:26.538280  122121 ssh_runner.go:195] Run: systemctl --version
I0903 22:39:26.538306  122121 main.go:141] libmachine: (functional-381687) Calling .GetSSHHostname
I0903 22:39:26.541338  122121 main.go:141] libmachine: (functional-381687) DBG | domain functional-381687 has defined MAC address 52:54:00:8a:cc:f3 in network mk-functional-381687
I0903 22:39:26.541788  122121 main.go:141] libmachine: (functional-381687) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:cc:f3", ip: ""} in network mk-functional-381687: {Iface:virbr1 ExpiryTime:2025-09-03 23:36:55 +0000 UTC Type:0 Mac:52:54:00:8a:cc:f3 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:functional-381687 Clientid:01:52:54:00:8a:cc:f3}
I0903 22:39:26.541826  122121 main.go:141] libmachine: (functional-381687) DBG | domain functional-381687 has defined IP address 192.168.39.241 and MAC address 52:54:00:8a:cc:f3 in network mk-functional-381687
I0903 22:39:26.541994  122121 main.go:141] libmachine: (functional-381687) Calling .GetSSHPort
I0903 22:39:26.542170  122121 main.go:141] libmachine: (functional-381687) Calling .GetSSHKeyPath
I0903 22:39:26.542326  122121 main.go:141] libmachine: (functional-381687) Calling .GetSSHUsername
I0903 22:39:26.542467  122121 sshutil.go:53] new ssh client: &{IP:192.168.39.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21341-109162/.minikube/machines/functional-381687/id_rsa Username:docker}
I0903 22:39:26.649590  122121 build_images.go:161] Building image from path: /tmp/build.4121236860.tar
I0903 22:39:26.649657  122121 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0903 22:39:26.671224  122121 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.4121236860.tar
I0903 22:39:26.688086  122121 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.4121236860.tar: stat -c "%s %y" /var/lib/minikube/build/build.4121236860.tar: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/build/build.4121236860.tar': No such file or directory
I0903 22:39:26.688121  122121 ssh_runner.go:362] scp /tmp/build.4121236860.tar --> /var/lib/minikube/build/build.4121236860.tar (3072 bytes)
I0903 22:39:26.774852  122121 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.4121236860
I0903 22:39:26.789503  122121 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.4121236860 -xf /var/lib/minikube/build/build.4121236860.tar
I0903 22:39:26.807539  122121 crio.go:315] Building image: /var/lib/minikube/build/build.4121236860
I0903 22:39:26.807685  122121 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-381687 /var/lib/minikube/build/build.4121236860 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0903 22:39:35.127873  122121 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-381687 /var/lib/minikube/build/build.4121236860 --cgroup-manager=cgroupfs: (8.320146233s)
I0903 22:39:35.127960  122121 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.4121236860
I0903 22:39:35.140081  122121 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.4121236860.tar
I0903 22:39:35.151639  122121 build_images.go:217] Built localhost/my-image:functional-381687 from /tmp/build.4121236860.tar
I0903 22:39:35.151684  122121 build_images.go:133] succeeded building to: functional-381687
I0903 22:39:35.151691  122121 build_images.go:134] failed building to: 
I0903 22:39:35.151723  122121 main.go:141] libmachine: Making call to close driver server
I0903 22:39:35.151742  122121 main.go:141] libmachine: (functional-381687) Calling .Close
I0903 22:39:35.151992  122121 main.go:141] libmachine: Successfully made call to close driver server
I0903 22:39:35.152018  122121 main.go:141] libmachine: Making call to close connection to plugin binary
I0903 22:39:35.152019  122121 main.go:141] libmachine: (functional-381687) DBG | Closing plugin on server side
I0903 22:39:35.152029  122121 main.go:141] libmachine: Making call to close driver server
I0903 22:39:35.152038  122121 main.go:141] libmachine: (functional-381687) Calling .Close
I0903 22:39:35.152388  122121 main.go:141] libmachine: Successfully made call to close driver server
I0903 22:39:35.152404  122121 main.go:141] libmachine: Making call to close connection to plugin binary
I0903 22:39:35.152423  122121 main.go:141] libmachine: (functional-381687) DBG | Closing plugin on server side
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-381687 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (9.40s)
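
Judging from the STEP 1/3 through STEP 3/3 lines in the build output, the Dockerfile under testdata/build is presumably equivalent to the sketch below; only the three steps are confirmed by the log, and the exact file contents are an assumption:

    # presumed testdata/build/Dockerfile, reconstructed from the STEP lines above
    FROM gcr.io/k8s-minikube/busybox
    RUN true
    ADD content.txt /

The stderr also documents the transport: the build context is tarred locally (/tmp/build.4121236860.tar), copied into /var/lib/minikube/build inside the VM, and built there with "sudo podman build ... --cgroup-manager=cgroupfs".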

TestFunctional/parallel/ImageCommands/Setup (1.72s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:357: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.704408715s)
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-381687
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.72s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.16s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-381687 image load --daemon kicbase/echo-server:functional-381687 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-381687 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.16s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-381687 image load --daemon kicbase/echo-server:functional-381687 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-381687 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.00s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (3.1s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-381687
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-381687 image load --daemon kicbase/echo-server:functional-381687 --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-linux-amd64 -p functional-381687 image load --daemon kicbase/echo-server:functional-381687 --alsologtostderr: (2.040981s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-381687 image ls
I0903 22:39:07.651368  113288 detect.go:223] nested VM detected
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (3.10s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.63s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-381687 image save kicbase/echo-server:functional-381687 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.63s)

TestFunctional/parallel/ServiceCmd/List (0.47s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-381687 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.47s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.53s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-381687 image rm kicbase/echo-server:functional-381687 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-381687 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.53s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.45s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-381687 service list -o json
functional_test.go:1504: Took "453.291844ms" to run "out/minikube-linux-amd64 -p functional-381687 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.45s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.93s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-381687 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-381687 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.93s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.32s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-381687 service --namespace=default --https --url hello-node
functional_test.go:1532: found endpoint: https://192.168.39.241:32665
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.32s)

TestFunctional/parallel/ServiceCmd/Format (0.32s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-381687 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.32s)

TestFunctional/parallel/ServiceCmd/URL (0.4s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-381687 service hello-node --url
functional_test.go:1575: found endpoint for hello-node: http://192.168.39.241:32665
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.40s)
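
Once "service --url" resolves the NodePort endpoint, it can be probed directly from the host; a minimal reachability check against the URL found above (the response body depends on the hello-node image):

    curl -s http://192.168.39.241:32665/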

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.97s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-381687
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-381687 image save --daemon kicbase/echo-server:functional-381687 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-381687
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.97s)
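
Taken together, the ImageSaveToFile, ImageLoadFromFile, and ImageSaveDaemon tests exercise a full export/import round trip. A hand-run equivalent, substituting an arbitrary /tmp path for the Jenkins workspace path used in this run:

    out/minikube-linux-amd64 -p functional-381687 image save kicbase/echo-server:functional-381687 /tmp/echo-server-save.tar
    out/minikube-linux-amd64 -p functional-381687 image load /tmp/echo-server-save.tar
    out/minikube-linux-amd64 -p functional-381687 image ls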

TestFunctional/parallel/ProfileCmd/profile_not_create (0.42s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.42s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.11s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-381687 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.11s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.1s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-381687 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.10s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-381687 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)
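
All three update-context variants rewrite the profile's kubeconfig entry. One way to verify the rewritten API server address afterwards (a sketch; kubectl is assumed to be on PATH and is not part of the test):

    kubectl config view --minify --context functional-381687 -o jsonpath='{.clusters[0].cluster.server}'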

TestFunctional/parallel/MountCmd/any-port (14.58s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-381687 /tmp/TestFunctionalparallelMountCmdany-port3825415460/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1756939150901018426" to /tmp/TestFunctionalparallelMountCmdany-port3825415460/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1756939150901018426" to /tmp/TestFunctionalparallelMountCmdany-port3825415460/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1756939150901018426" to /tmp/TestFunctionalparallelMountCmdany-port3825415460/001/test-1756939150901018426
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-381687 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-381687 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (234.958499ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
I0903 22:39:11.136284  113288 retry.go:31] will retry after 356.88109ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-381687 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-381687 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Sep  3 22:39 created-by-test
-rw-r--r-- 1 docker docker 24 Sep  3 22:39 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Sep  3 22:39 test-1756939150901018426
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-381687 ssh cat /mount-9p/test-1756939150901018426
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-381687 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [ad9ec3f7-830e-4810-9b3c-fb5d4ee5b080] Pending
helpers_test.go:352: "busybox-mount" [ad9ec3f7-830e-4810-9b3c-fb5d4ee5b080] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [ad9ec3f7-830e-4810-9b3c-fb5d4ee5b080] Running
helpers_test.go:352: "busybox-mount" [ad9ec3f7-830e-4810-9b3c-fb5d4ee5b080] Running / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [ad9ec3f7-830e-4810-9b3c-fb5d4ee5b080] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 12.004752961s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-381687 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-381687 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-381687 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-381687 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-381687 /tmp/TestFunctionalparallelMountCmdany-port3825415460/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (14.58s)
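
The any-port flow above can be reproduced by hand: start the 9p server in the background, confirm the mount from inside the VM, then inspect it (the /tmp/scratch path is a placeholder; the test used a generated temp dir):

    out/minikube-linux-amd64 mount -p functional-381687 /tmp/scratch:/mount-9p &
    out/minikube-linux-amd64 -p functional-381687 ssh "findmnt -T /mount-9p | grep 9p"
    out/minikube-linux-amd64 -p functional-381687 ssh -- ls -la /mount-9p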

TestFunctional/parallel/ProfileCmd/profile_list (0.39s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "339.260704ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "52.009235ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.39s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.37s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "320.787689ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "52.017886ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.37s)
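
The JSON form is the machine-readable variant of profile list; for instance, extracting just the profile names (jq assumed available; the valid/Name field layout is an assumption about minikube's profile JSON schema):

    out/minikube-linux-amd64 profile list -o json | jq -r '.valid[].Name'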

TestFunctional/parallel/MountCmd/specific-port (1.8s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-381687 /tmp/TestFunctionalparallelMountCmdspecific-port4135376379/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-381687 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-381687 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (269.008979ms)
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0903 22:39:25.749072  113288 retry.go:31] will retry after 389.106148ms: exit status 1
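The first findmnt probe races the mount daemon, so an initial failure is expected occasionally; the harness backs off and retries, as the retry.go line above shows. The shape of that pattern as a sketch (not minikube's actual pkg/util/retry code):

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// retryUntil runs fn until it succeeds or the deadline passes, sleeping a
// randomized, growing interval between attempts.
func retryUntil(maxWait time.Duration, fn func() error) error {
	deadline := time.Now().Add(maxWait)
	backoff := 250 * time.Millisecond
	for {
		err := fn()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out, last error: %v", err)
		}
		sleep := backoff + time.Duration(rand.Int63n(int64(backoff)))
		fmt.Printf("will retry after %v: %v\n", sleep, err)
		time.Sleep(sleep)
		backoff *= 2
	}
}

func main() {
	attempt := 0
	_ = retryUntil(5*time.Second, func() error {
		attempt++
		if attempt < 3 {
			return fmt.Errorf("exit status 1") // e.g. findmnt before the mount lands
		}
		return nil
	})
}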
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-381687 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-381687 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-381687 /tmp/TestFunctionalparallelMountCmdspecific-port4135376379/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-381687 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-381687 ssh "sudo umount -f /mount-9p": exit status 1 (233.310866ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-381687 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-381687 /tmp/TestFunctionalparallelMountCmdspecific-port4135376379/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.80s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.66s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-381687 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4191282824/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-381687 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4191282824/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-381687 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4191282824/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-381687 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-381687 ssh "findmnt -T" /mount1: exit status 1 (295.27644ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I0903 22:39:27.580815  113288 retry.go:31] will retry after 677.230988ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-381687 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-381687 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-381687 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-381687 --kill=true
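VerifyCleanup starts three mount daemons (/mount1, /mount2, /mount3) backed by one host directory, then relies on the single `mount --kill=true` above to tear them all down; the "unable to find parent, assuming dead" lines below are the harness confirming each daemon is gone. A sketch of that kill-and-check step (profile name from this run; the liveness probe is a generic Unix signal-0 check, not the test's helper):

package main

import (
	"fmt"
	"os/exec"
	"syscall"
)

// alive reports whether pid still exists: signal 0 checks existence
// without delivering anything.
func alive(pid int) bool {
	return syscall.Kill(pid, syscall.Signal(0)) == nil
}

func main() {
	// Terminates every mount daemon registered for the profile.
	if err := exec.Command("out/minikube-linux-amd64", "mount", "-p", "functional-381687", "--kill=true").Run(); err != nil {
		fmt.Println("kill failed:", err)
	}
	// The real test polls the saved daemon pids here instead of pid 1.
	fmt.Println("init alive (sanity check):", alive(1))
}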
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-381687 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4191282824/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-381687 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4191282824/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-381687 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4191282824/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.66s)

TestFunctional/delete_echo-server_images (0.03s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-381687
--- PASS: TestFunctional/delete_echo-server_images (0.03s)

TestFunctional/delete_my-image_image (0.01s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-381687
--- PASS: TestFunctional/delete_my-image_image (0.01s)

TestFunctional/delete_minikube_cached_images (0.01s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-381687
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

TestMultiControlPlane/serial/StartCluster (221.03s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-718270 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio
E0903 22:41:03.160893  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/addons-389176/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0903 22:41:30.875392  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/addons-389176/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-718270 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio: (3m40.294162097s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-718270 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (221.03s)

TestMultiControlPlane/serial/DeployApp (6.63s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-718270 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-718270 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-718270 kubectl -- rollout status deployment/busybox: (4.351858537s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-718270 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-718270 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-718270 kubectl -- exec busybox-7b57f96db7-4lrnw -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-718270 kubectl -- exec busybox-7b57f96db7-8bhbh -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-718270 kubectl -- exec busybox-7b57f96db7-sh4r2 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-718270 kubectl -- exec busybox-7b57f96db7-4lrnw -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-718270 kubectl -- exec busybox-7b57f96db7-8bhbh -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-718270 kubectl -- exec busybox-7b57f96db7-sh4r2 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-718270 kubectl -- exec busybox-7b57f96db7-4lrnw -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-718270 kubectl -- exec busybox-7b57f96db7-8bhbh -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-718270 kubectl -- exec busybox-7b57f96db7-sh4r2 -- nslookup kubernetes.default.svc.cluster.local
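The nslookup matrix above checks three resolution paths from every busybox replica: an external name (kubernetes.io), the short in-cluster service name, and the fully qualified one, which together exercise both upstream and cluster DNS on each node. A compact sketch of the same loop (pod names copied from this run; they change per deployment):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	pods := []string{"busybox-7b57f96db7-4lrnw", "busybox-7b57f96db7-8bhbh", "busybox-7b57f96db7-sh4r2"}
	names := []string{"kubernetes.io", "kubernetes.default", "kubernetes.default.svc.cluster.local"}
	for _, name := range names {
		for _, pod := range pods {
			out, err := exec.Command("kubectl", "--context", "ha-718270",
				"exec", pod, "--", "nslookup", name).CombinedOutput()
			if err != nil {
				fmt.Printf("%s: nslookup %s failed: %v\n%s\n", pod, name, err, out)
			}
		}
	}
}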
--- PASS: TestMultiControlPlane/serial/DeployApp (6.63s)

TestMultiControlPlane/serial/PingHostFromPods (1.13s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-718270 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-718270 kubectl -- exec busybox-7b57f96db7-4lrnw -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-718270 kubectl -- exec busybox-7b57f96db7-4lrnw -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-718270 kubectl -- exec busybox-7b57f96db7-8bhbh -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-718270 kubectl -- exec busybox-7b57f96db7-8bhbh -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-718270 kubectl -- exec busybox-7b57f96db7-sh4r2 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-718270 kubectl -- exec busybox-7b57f96db7-sh4r2 -- sh -c "ping -c 1 192.168.39.1"
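The pipeline quoted above leans on busybox nslookup's fixed output layout: the answer sits on line 5 (awk 'NR==5') and the IP is the third space-separated field (cut -d' ' -f3); the extracted address is then pinged once from the pod to prove the pod-to-host path. A sketch of the two steps (context and pod name taken from this run):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	const pod = "busybox-7b57f96db7-4lrnw"
	script := "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
	out, err := exec.Command("kubectl", "--context", "ha-718270",
		"exec", pod, "--", "sh", "-c", script).Output()
	if err != nil {
		panic(err)
	}
	hostIP := strings.TrimSpace(string(out)) // 192.168.39.1 in this run
	if err := exec.Command("kubectl", "--context", "ha-718270",
		"exec", pod, "--", "sh", "-c", "ping -c 1 "+hostIP).Run(); err != nil {
		fmt.Println("ping failed:", err)
	}
}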
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.13s)

TestMultiControlPlane/serial/AddWorkerNode (52.77s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-718270 node add --alsologtostderr -v 5
E0903 22:43:59.139373  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/functional-381687/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0903 22:43:59.145820  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/functional-381687/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0903 22:43:59.157179  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/functional-381687/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0903 22:43:59.178556  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/functional-381687/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0903 22:43:59.219949  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/functional-381687/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0903 22:43:59.301215  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/functional-381687/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0903 22:43:59.462559  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/functional-381687/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0903 22:43:59.783972  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/functional-381687/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0903 22:44:00.425800  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/functional-381687/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0903 22:44:01.707414  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/functional-381687/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0903 22:44:04.268848  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/functional-381687/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0903 22:44:09.390171  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/functional-381687/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0903 22:44:19.632551  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/functional-381687/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-718270 node add --alsologtostderr -v 5: (51.848480638s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-718270 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (52.77s)

TestMultiControlPlane/serial/NodeLabels (0.07s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-718270 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.91s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.91s)

TestMultiControlPlane/serial/CopyFile (13.32s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-718270 status --output json --alsologtostderr -v 5
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-718270 cp testdata/cp-test.txt ha-718270:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-718270 ssh -n ha-718270 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-718270 cp ha-718270:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1779470712/001/cp-test_ha-718270.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-718270 ssh -n ha-718270 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-718270 cp ha-718270:/home/docker/cp-test.txt ha-718270-m02:/home/docker/cp-test_ha-718270_ha-718270-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-718270 ssh -n ha-718270 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-718270 ssh -n ha-718270-m02 "sudo cat /home/docker/cp-test_ha-718270_ha-718270-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-718270 cp ha-718270:/home/docker/cp-test.txt ha-718270-m03:/home/docker/cp-test_ha-718270_ha-718270-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-718270 ssh -n ha-718270 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-718270 ssh -n ha-718270-m03 "sudo cat /home/docker/cp-test_ha-718270_ha-718270-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-718270 cp ha-718270:/home/docker/cp-test.txt ha-718270-m04:/home/docker/cp-test_ha-718270_ha-718270-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-718270 ssh -n ha-718270 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-718270 ssh -n ha-718270-m04 "sudo cat /home/docker/cp-test_ha-718270_ha-718270-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-718270 cp testdata/cp-test.txt ha-718270-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-718270 ssh -n ha-718270-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-718270 cp ha-718270-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1779470712/001/cp-test_ha-718270-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-718270 ssh -n ha-718270-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-718270 cp ha-718270-m02:/home/docker/cp-test.txt ha-718270:/home/docker/cp-test_ha-718270-m02_ha-718270.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-718270 ssh -n ha-718270-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-718270 ssh -n ha-718270 "sudo cat /home/docker/cp-test_ha-718270-m02_ha-718270.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-718270 cp ha-718270-m02:/home/docker/cp-test.txt ha-718270-m03:/home/docker/cp-test_ha-718270-m02_ha-718270-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-718270 ssh -n ha-718270-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-718270 ssh -n ha-718270-m03 "sudo cat /home/docker/cp-test_ha-718270-m02_ha-718270-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-718270 cp ha-718270-m02:/home/docker/cp-test.txt ha-718270-m04:/home/docker/cp-test_ha-718270-m02_ha-718270-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-718270 ssh -n ha-718270-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-718270 ssh -n ha-718270-m04 "sudo cat /home/docker/cp-test_ha-718270-m02_ha-718270-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-718270 cp testdata/cp-test.txt ha-718270-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-718270 ssh -n ha-718270-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-718270 cp ha-718270-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1779470712/001/cp-test_ha-718270-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-718270 ssh -n ha-718270-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-718270 cp ha-718270-m03:/home/docker/cp-test.txt ha-718270:/home/docker/cp-test_ha-718270-m03_ha-718270.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-718270 ssh -n ha-718270-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-718270 ssh -n ha-718270 "sudo cat /home/docker/cp-test_ha-718270-m03_ha-718270.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-718270 cp ha-718270-m03:/home/docker/cp-test.txt ha-718270-m02:/home/docker/cp-test_ha-718270-m03_ha-718270-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-718270 ssh -n ha-718270-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-718270 ssh -n ha-718270-m02 "sudo cat /home/docker/cp-test_ha-718270-m03_ha-718270-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-718270 cp ha-718270-m03:/home/docker/cp-test.txt ha-718270-m04:/home/docker/cp-test_ha-718270-m03_ha-718270-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-718270 ssh -n ha-718270-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-718270 ssh -n ha-718270-m04 "sudo cat /home/docker/cp-test_ha-718270-m03_ha-718270-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-718270 cp testdata/cp-test.txt ha-718270-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-718270 ssh -n ha-718270-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-718270 cp ha-718270-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1779470712/001/cp-test_ha-718270-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-718270 ssh -n ha-718270-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-718270 cp ha-718270-m04:/home/docker/cp-test.txt ha-718270:/home/docker/cp-test_ha-718270-m04_ha-718270.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-718270 ssh -n ha-718270-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-718270 ssh -n ha-718270 "sudo cat /home/docker/cp-test_ha-718270-m04_ha-718270.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-718270 cp ha-718270-m04:/home/docker/cp-test.txt ha-718270-m02:/home/docker/cp-test_ha-718270-m04_ha-718270-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-718270 ssh -n ha-718270-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-718270 ssh -n ha-718270-m02 "sudo cat /home/docker/cp-test_ha-718270-m04_ha-718270-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-718270 cp ha-718270-m04:/home/docker/cp-test.txt ha-718270-m03:/home/docker/cp-test_ha-718270-m04_ha-718270-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-718270 ssh -n ha-718270-m04 "sudo cat /home/docker/cp-test.txt"
E0903 22:44:40.114455  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/functional-381687/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-718270 ssh -n ha-718270-m03 "sudo cat /home/docker/cp-test_ha-718270-m04_ha-718270-m03.txt"
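Every block above is the same round-trip: `minikube cp` in one of three directions (host to node, node to host, node to node), immediately verified by `ssh ... sudo cat` on the receiving side. One such round-trip as a sketch (node and paths taken from the run):

package main

import (
	"fmt"
	"os/exec"
)

// mk wraps the test-built minikube binary used throughout this report.
func mk(args ...string) ([]byte, error) {
	return exec.Command("out/minikube-linux-amd64", args...).CombinedOutput()
}

func main() {
	// Host -> node copy...
	if out, err := mk("-p", "ha-718270", "cp", "testdata/cp-test.txt",
		"ha-718270-m02:/home/docker/cp-test.txt"); err != nil {
		panic(fmt.Sprintf("%v\n%s", err, out))
	}
	// ...verified by reading the file back over SSH.
	out, err := mk("-p", "ha-718270", "ssh", "-n", "ha-718270-m02",
		"sudo cat /home/docker/cp-test.txt")
	if err != nil {
		panic(err)
	}
	fmt.Printf("round-tripped: %s", out)
}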
--- PASS: TestMultiControlPlane/serial/CopyFile (13.32s)

TestMultiControlPlane/serial/StopSecondaryNode (91.47s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-718270 node stop m02 --alsologtostderr -v 5
E0903 22:45:21.075945  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/functional-381687/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0903 22:46:03.161373  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/addons-389176/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-718270 node stop m02 --alsologtostderr -v 5: (1m30.783904187s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-718270 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-718270 status --alsologtostderr -v 5: exit status 7 (685.601906ms)

-- stdout --
	ha-718270
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-718270-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-718270-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-718270-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0903 22:46:11.337002  127202 out.go:360] Setting OutFile to fd 1 ...
	I0903 22:46:11.337259  127202 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0903 22:46:11.337269  127202 out.go:374] Setting ErrFile to fd 2...
	I0903 22:46:11.337274  127202 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0903 22:46:11.337485  127202 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21341-109162/.minikube/bin
	I0903 22:46:11.337688  127202 out.go:368] Setting JSON to false
	I0903 22:46:11.337725  127202 mustload.go:65] Loading cluster: ha-718270
	I0903 22:46:11.337844  127202 notify.go:220] Checking for updates...
	I0903 22:46:11.338200  127202 config.go:182] Loaded profile config "ha-718270": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0903 22:46:11.338234  127202 status.go:174] checking status of ha-718270 ...
	I0903 22:46:11.338781  127202 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21341-109162/.minikube/bin/docker-machine-driver-kvm2
	I0903 22:46:11.338828  127202 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0903 22:46:11.354875  127202 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37327
	I0903 22:46:11.355408  127202 main.go:141] libmachine: () Calling .GetVersion
	I0903 22:46:11.356054  127202 main.go:141] libmachine: Using API Version  1
	I0903 22:46:11.356083  127202 main.go:141] libmachine: () Calling .SetConfigRaw
	I0903 22:46:11.356482  127202 main.go:141] libmachine: () Calling .GetMachineName
	I0903 22:46:11.356682  127202 main.go:141] libmachine: (ha-718270) Calling .GetState
	I0903 22:46:11.358329  127202 status.go:371] ha-718270 host status = "Running" (err=<nil>)
	I0903 22:46:11.358345  127202 host.go:66] Checking if "ha-718270" exists ...
	I0903 22:46:11.358608  127202 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21341-109162/.minikube/bin/docker-machine-driver-kvm2
	I0903 22:46:11.358648  127202 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0903 22:46:11.373409  127202 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38147
	I0903 22:46:11.373875  127202 main.go:141] libmachine: () Calling .GetVersion
	I0903 22:46:11.374356  127202 main.go:141] libmachine: Using API Version  1
	I0903 22:46:11.374380  127202 main.go:141] libmachine: () Calling .SetConfigRaw
	I0903 22:46:11.374720  127202 main.go:141] libmachine: () Calling .GetMachineName
	I0903 22:46:11.374903  127202 main.go:141] libmachine: (ha-718270) Calling .GetIP
	I0903 22:46:11.377820  127202 main.go:141] libmachine: (ha-718270) DBG | domain ha-718270 has defined MAC address 52:54:00:76:65:b2 in network mk-ha-718270
	I0903 22:46:11.378276  127202 main.go:141] libmachine: (ha-718270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:65:b2", ip: ""} in network mk-ha-718270: {Iface:virbr1 ExpiryTime:2025-09-03 23:39:59 +0000 UTC Type:0 Mac:52:54:00:76:65:b2 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:ha-718270 Clientid:01:52:54:00:76:65:b2}
	I0903 22:46:11.378312  127202 main.go:141] libmachine: (ha-718270) DBG | domain ha-718270 has defined IP address 192.168.39.103 and MAC address 52:54:00:76:65:b2 in network mk-ha-718270
	I0903 22:46:11.378479  127202 host.go:66] Checking if "ha-718270" exists ...
	I0903 22:46:11.378902  127202 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21341-109162/.minikube/bin/docker-machine-driver-kvm2
	I0903 22:46:11.378952  127202 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0903 22:46:11.394178  127202 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40805
	I0903 22:46:11.394618  127202 main.go:141] libmachine: () Calling .GetVersion
	I0903 22:46:11.395385  127202 main.go:141] libmachine: Using API Version  1
	I0903 22:46:11.395585  127202 main.go:141] libmachine: () Calling .SetConfigRaw
	I0903 22:46:11.396042  127202 main.go:141] libmachine: () Calling .GetMachineName
	I0903 22:46:11.396821  127202 main.go:141] libmachine: (ha-718270) Calling .DriverName
	I0903 22:46:11.397089  127202 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0903 22:46:11.397122  127202 main.go:141] libmachine: (ha-718270) Calling .GetSSHHostname
	I0903 22:46:11.400214  127202 main.go:141] libmachine: (ha-718270) DBG | domain ha-718270 has defined MAC address 52:54:00:76:65:b2 in network mk-ha-718270
	I0903 22:46:11.400668  127202 main.go:141] libmachine: (ha-718270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:65:b2", ip: ""} in network mk-ha-718270: {Iface:virbr1 ExpiryTime:2025-09-03 23:39:59 +0000 UTC Type:0 Mac:52:54:00:76:65:b2 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:ha-718270 Clientid:01:52:54:00:76:65:b2}
	I0903 22:46:11.400695  127202 main.go:141] libmachine: (ha-718270) DBG | domain ha-718270 has defined IP address 192.168.39.103 and MAC address 52:54:00:76:65:b2 in network mk-ha-718270
	I0903 22:46:11.400835  127202 main.go:141] libmachine: (ha-718270) Calling .GetSSHPort
	I0903 22:46:11.401006  127202 main.go:141] libmachine: (ha-718270) Calling .GetSSHKeyPath
	I0903 22:46:11.401130  127202 main.go:141] libmachine: (ha-718270) Calling .GetSSHUsername
	I0903 22:46:11.401290  127202 sshutil.go:53] new ssh client: &{IP:192.168.39.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21341-109162/.minikube/machines/ha-718270/id_rsa Username:docker}
	I0903 22:46:11.492048  127202 ssh_runner.go:195] Run: systemctl --version
	I0903 22:46:11.501141  127202 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0903 22:46:11.520806  127202 kubeconfig.go:125] found "ha-718270" server: "https://192.168.39.254:8443"
	I0903 22:46:11.520842  127202 api_server.go:166] Checking apiserver status ...
	I0903 22:46:11.520885  127202 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 22:46:11.540209  127202 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1395/cgroup
	W0903 22:46:11.555193  127202 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1395/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0903 22:46:11.555259  127202 ssh_runner.go:195] Run: ls
	I0903 22:46:11.561804  127202 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0903 22:46:11.568388  127202 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0903 22:46:11.568415  127202 status.go:463] ha-718270 apiserver status = Running (err=<nil>)
	I0903 22:46:11.568426  127202 status.go:176] ha-718270 status: &{Name:ha-718270 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0903 22:46:11.568444  127202 status.go:174] checking status of ha-718270-m02 ...
	I0903 22:46:11.568722  127202 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21341-109162/.minikube/bin/docker-machine-driver-kvm2
	I0903 22:46:11.568768  127202 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0903 22:46:11.585020  127202 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44989
	I0903 22:46:11.585547  127202 main.go:141] libmachine: () Calling .GetVersion
	I0903 22:46:11.586117  127202 main.go:141] libmachine: Using API Version  1
	I0903 22:46:11.586142  127202 main.go:141] libmachine: () Calling .SetConfigRaw
	I0903 22:46:11.586595  127202 main.go:141] libmachine: () Calling .GetMachineName
	I0903 22:46:11.586773  127202 main.go:141] libmachine: (ha-718270-m02) Calling .GetState
	I0903 22:46:11.588472  127202 status.go:371] ha-718270-m02 host status = "Stopped" (err=<nil>)
	I0903 22:46:11.588488  127202 status.go:384] host is not running, skipping remaining checks
	I0903 22:46:11.588494  127202 status.go:176] ha-718270-m02 status: &{Name:ha-718270-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0903 22:46:11.588512  127202 status.go:174] checking status of ha-718270-m03 ...
	I0903 22:46:11.588779  127202 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21341-109162/.minikube/bin/docker-machine-driver-kvm2
	I0903 22:46:11.588815  127202 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0903 22:46:11.603958  127202 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40319
	I0903 22:46:11.604550  127202 main.go:141] libmachine: () Calling .GetVersion
	I0903 22:46:11.605163  127202 main.go:141] libmachine: Using API Version  1
	I0903 22:46:11.605187  127202 main.go:141] libmachine: () Calling .SetConfigRaw
	I0903 22:46:11.605634  127202 main.go:141] libmachine: () Calling .GetMachineName
	I0903 22:46:11.605830  127202 main.go:141] libmachine: (ha-718270-m03) Calling .GetState
	I0903 22:46:11.607478  127202 status.go:371] ha-718270-m03 host status = "Running" (err=<nil>)
	I0903 22:46:11.607496  127202 host.go:66] Checking if "ha-718270-m03" exists ...
	I0903 22:46:11.607933  127202 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21341-109162/.minikube/bin/docker-machine-driver-kvm2
	I0903 22:46:11.607986  127202 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0903 22:46:11.623983  127202 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34235
	I0903 22:46:11.624452  127202 main.go:141] libmachine: () Calling .GetVersion
	I0903 22:46:11.625006  127202 main.go:141] libmachine: Using API Version  1
	I0903 22:46:11.625042  127202 main.go:141] libmachine: () Calling .SetConfigRaw
	I0903 22:46:11.625373  127202 main.go:141] libmachine: () Calling .GetMachineName
	I0903 22:46:11.625607  127202 main.go:141] libmachine: (ha-718270-m03) Calling .GetIP
	I0903 22:46:11.628305  127202 main.go:141] libmachine: (ha-718270-m03) DBG | domain ha-718270-m03 has defined MAC address 52:54:00:91:c8:de in network mk-ha-718270
	I0903 22:46:11.628737  127202 main.go:141] libmachine: (ha-718270-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:c8:de", ip: ""} in network mk-ha-718270: {Iface:virbr1 ExpiryTime:2025-09-03 23:42:12 +0000 UTC Type:0 Mac:52:54:00:91:c8:de Iaid: IPaddr:192.168.39.219 Prefix:24 Hostname:ha-718270-m03 Clientid:01:52:54:00:91:c8:de}
	I0903 22:46:11.628782  127202 main.go:141] libmachine: (ha-718270-m03) DBG | domain ha-718270-m03 has defined IP address 192.168.39.219 and MAC address 52:54:00:91:c8:de in network mk-ha-718270
	I0903 22:46:11.628829  127202 host.go:66] Checking if "ha-718270-m03" exists ...
	I0903 22:46:11.629115  127202 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21341-109162/.minikube/bin/docker-machine-driver-kvm2
	I0903 22:46:11.629152  127202 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0903 22:46:11.645980  127202 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45075
	I0903 22:46:11.646436  127202 main.go:141] libmachine: () Calling .GetVersion
	I0903 22:46:11.646881  127202 main.go:141] libmachine: Using API Version  1
	I0903 22:46:11.646905  127202 main.go:141] libmachine: () Calling .SetConfigRaw
	I0903 22:46:11.647228  127202 main.go:141] libmachine: () Calling .GetMachineName
	I0903 22:46:11.647393  127202 main.go:141] libmachine: (ha-718270-m03) Calling .DriverName
	I0903 22:46:11.647584  127202 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0903 22:46:11.647612  127202 main.go:141] libmachine: (ha-718270-m03) Calling .GetSSHHostname
	I0903 22:46:11.650292  127202 main.go:141] libmachine: (ha-718270-m03) DBG | domain ha-718270-m03 has defined MAC address 52:54:00:91:c8:de in network mk-ha-718270
	I0903 22:46:11.650782  127202 main.go:141] libmachine: (ha-718270-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:c8:de", ip: ""} in network mk-ha-718270: {Iface:virbr1 ExpiryTime:2025-09-03 23:42:12 +0000 UTC Type:0 Mac:52:54:00:91:c8:de Iaid: IPaddr:192.168.39.219 Prefix:24 Hostname:ha-718270-m03 Clientid:01:52:54:00:91:c8:de}
	I0903 22:46:11.650811  127202 main.go:141] libmachine: (ha-718270-m03) DBG | domain ha-718270-m03 has defined IP address 192.168.39.219 and MAC address 52:54:00:91:c8:de in network mk-ha-718270
	I0903 22:46:11.650987  127202 main.go:141] libmachine: (ha-718270-m03) Calling .GetSSHPort
	I0903 22:46:11.651140  127202 main.go:141] libmachine: (ha-718270-m03) Calling .GetSSHKeyPath
	I0903 22:46:11.651272  127202 main.go:141] libmachine: (ha-718270-m03) Calling .GetSSHUsername
	I0903 22:46:11.651430  127202 sshutil.go:53] new ssh client: &{IP:192.168.39.219 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21341-109162/.minikube/machines/ha-718270-m03/id_rsa Username:docker}
	I0903 22:46:11.746935  127202 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0903 22:46:11.769072  127202 kubeconfig.go:125] found "ha-718270" server: "https://192.168.39.254:8443"
	I0903 22:46:11.769103  127202 api_server.go:166] Checking apiserver status ...
	I0903 22:46:11.769134  127202 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 22:46:11.788853  127202 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1835/cgroup
	W0903 22:46:11.803443  127202 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1835/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0903 22:46:11.803500  127202 ssh_runner.go:195] Run: ls
	I0903 22:46:11.808091  127202 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0903 22:46:11.812846  127202 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0903 22:46:11.812868  127202 status.go:463] ha-718270-m03 apiserver status = Running (err=<nil>)
	I0903 22:46:11.812880  127202 status.go:176] ha-718270-m03 status: &{Name:ha-718270-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0903 22:46:11.812901  127202 status.go:174] checking status of ha-718270-m04 ...
	I0903 22:46:11.813187  127202 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21341-109162/.minikube/bin/docker-machine-driver-kvm2
	I0903 22:46:11.813241  127202 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0903 22:46:11.829576  127202 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37701
	I0903 22:46:11.830018  127202 main.go:141] libmachine: () Calling .GetVersion
	I0903 22:46:11.830490  127202 main.go:141] libmachine: Using API Version  1
	I0903 22:46:11.830514  127202 main.go:141] libmachine: () Calling .SetConfigRaw
	I0903 22:46:11.830863  127202 main.go:141] libmachine: () Calling .GetMachineName
	I0903 22:46:11.831086  127202 main.go:141] libmachine: (ha-718270-m04) Calling .GetState
	I0903 22:46:11.832810  127202 status.go:371] ha-718270-m04 host status = "Running" (err=<nil>)
	I0903 22:46:11.832837  127202 host.go:66] Checking if "ha-718270-m04" exists ...
	I0903 22:46:11.833242  127202 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21341-109162/.minikube/bin/docker-machine-driver-kvm2
	I0903 22:46:11.833293  127202 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0903 22:46:11.848554  127202 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40727
	I0903 22:46:11.849055  127202 main.go:141] libmachine: () Calling .GetVersion
	I0903 22:46:11.849558  127202 main.go:141] libmachine: Using API Version  1
	I0903 22:46:11.849589  127202 main.go:141] libmachine: () Calling .SetConfigRaw
	I0903 22:46:11.849904  127202 main.go:141] libmachine: () Calling .GetMachineName
	I0903 22:46:11.850073  127202 main.go:141] libmachine: (ha-718270-m04) Calling .GetIP
	I0903 22:46:11.852522  127202 main.go:141] libmachine: (ha-718270-m04) DBG | domain ha-718270-m04 has defined MAC address 52:54:00:ce:d6:cf in network mk-ha-718270
	I0903 22:46:11.852947  127202 main.go:141] libmachine: (ha-718270-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:d6:cf", ip: ""} in network mk-ha-718270: {Iface:virbr1 ExpiryTime:2025-09-03 23:43:49 +0000 UTC Type:0 Mac:52:54:00:ce:d6:cf Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:ha-718270-m04 Clientid:01:52:54:00:ce:d6:cf}
	I0903 22:46:11.852972  127202 main.go:141] libmachine: (ha-718270-m04) DBG | domain ha-718270-m04 has defined IP address 192.168.39.239 and MAC address 52:54:00:ce:d6:cf in network mk-ha-718270
	I0903 22:46:11.853125  127202 host.go:66] Checking if "ha-718270-m04" exists ...
	I0903 22:46:11.853436  127202 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21341-109162/.minikube/bin/docker-machine-driver-kvm2
	I0903 22:46:11.853484  127202 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0903 22:46:11.868184  127202 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37829
	I0903 22:46:11.868581  127202 main.go:141] libmachine: () Calling .GetVersion
	I0903 22:46:11.869069  127202 main.go:141] libmachine: Using API Version  1
	I0903 22:46:11.869094  127202 main.go:141] libmachine: () Calling .SetConfigRaw
	I0903 22:46:11.869423  127202 main.go:141] libmachine: () Calling .GetMachineName
	I0903 22:46:11.869619  127202 main.go:141] libmachine: (ha-718270-m04) Calling .DriverName
	I0903 22:46:11.869805  127202 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0903 22:46:11.869829  127202 main.go:141] libmachine: (ha-718270-m04) Calling .GetSSHHostname
	I0903 22:46:11.872717  127202 main.go:141] libmachine: (ha-718270-m04) DBG | domain ha-718270-m04 has defined MAC address 52:54:00:ce:d6:cf in network mk-ha-718270
	I0903 22:46:11.873165  127202 main.go:141] libmachine: (ha-718270-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:d6:cf", ip: ""} in network mk-ha-718270: {Iface:virbr1 ExpiryTime:2025-09-03 23:43:49 +0000 UTC Type:0 Mac:52:54:00:ce:d6:cf Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:ha-718270-m04 Clientid:01:52:54:00:ce:d6:cf}
	I0903 22:46:11.873190  127202 main.go:141] libmachine: (ha-718270-m04) DBG | domain ha-718270-m04 has defined IP address 192.168.39.239 and MAC address 52:54:00:ce:d6:cf in network mk-ha-718270
	I0903 22:46:11.873372  127202 main.go:141] libmachine: (ha-718270-m04) Calling .GetSSHPort
	I0903 22:46:11.873574  127202 main.go:141] libmachine: (ha-718270-m04) Calling .GetSSHKeyPath
	I0903 22:46:11.873741  127202 main.go:141] libmachine: (ha-718270-m04) Calling .GetSSHUsername
	I0903 22:46:11.873903  127202 sshutil.go:53] new ssh client: &{IP:192.168.39.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21341-109162/.minikube/machines/ha-718270-m04/id_rsa Username:docker}
	I0903 22:46:11.957553  127202 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0903 22:46:11.974135  127202 status.go:176] ha-718270-m04 status: &{Name:ha-718270-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
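`minikube status` exits non-zero whenever any node is not fully running, which is why the exit status 7 above still comes with a complete per-node table (m02 Stopped, the rest Running). A sketch of consuming both the table and the code; treating any non-zero exit as "degraded" is the safe reading unless you depend on minikube's exact exit-code table:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "-p", "ha-718270", "status")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out)) // the per-node table is printed either way
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		fmt.Println("degraded, status exit code:", exitErr.ExitCode())
	}
}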
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (91.47s)

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.71s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.71s)

TestMultiControlPlane/serial/RestartSecondaryNode (33.62s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-718270 node start m02 --alsologtostderr -v 5
E0903 22:46:42.997655  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/functional-381687/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-718270 node start m02 --alsologtostderr -v 5: (32.326159278s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-718270 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Done: out/minikube-linux-amd64 -p ha-718270 status --alsologtostderr -v 5: (1.203933725s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (33.62s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.96s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.96s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (410.4s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-718270 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-718270 stop --alsologtostderr -v 5
E0903 22:48:59.138971  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/functional-381687/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0903 22:49:26.839496  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/functional-381687/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0903 22:51:03.161204  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/addons-389176/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-718270 stop --alsologtostderr -v 5: (4m34.885735843s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-718270 start --wait true --alsologtostderr -v 5
E0903 22:52:26.239270  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/addons-389176/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 -p ha-718270 start --wait true --alsologtostderr -v 5: (2m15.407249301s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-718270 node list --alsologtostderr -v 5
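The assertion behind this test is simply that the node set recorded by `node list` before the stop/start cycle matches the one recorded after it. A sketch of that compare (profile name from this run):

package main

import (
	"bytes"
	"fmt"
	"os/exec"
)

func nodeList() []byte {
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "ha-718270", "node", "list").Output()
	if err != nil {
		panic(err)
	}
	return out
}

func main() {
	before := nodeList()
	// ... `stop` and `start --wait true` would run here, as in the log ...
	after := nodeList()
	if !bytes.Equal(before, after) {
		fmt.Printf("node list changed across restart:\nbefore:\n%safter:\n%s", before, after)
	}
}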
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (410.40s)

TestMultiControlPlane/serial/DeleteSecondaryNode (18.62s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-718270 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-718270 node delete m03 --alsologtostderr -v 5: (17.831442421s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-718270 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
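The go-template above walks every node's conditions and prints just the Ready condition's status, one per line, so the follow-up check reduces to "no False/Unknown in the output". The same template driven from Go (cluster context implied by the current kubeconfig):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same template as the log: emit each node's Ready condition status.
	tmpl := `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}}{{.status}}{{"\n"}}{{end}}{{end}}{{end}}`
	out, err := exec.Command("kubectl", "get", "nodes", "-o", "go-template="+tmpl).Output()
	if err != nil {
		panic(err)
	}
	for _, status := range strings.Fields(string(out)) {
		if status != "True" {
			fmt.Println("node not Ready:", status)
		}
	}
}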
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (18.62s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.71s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.71s)

TestMultiControlPlane/serial/StopCluster (272.53s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-718270 stop --alsologtostderr -v 5
E0903 22:53:59.139625  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/functional-381687/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0903 22:56:03.161331  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/addons-389176/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-718270 stop --alsologtostderr -v 5: (4m32.419984963s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-718270 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-718270 status --alsologtostderr -v 5: exit status 7 (112.969264ms)

-- stdout --
	ha-718270
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-718270-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-718270-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0903 22:58:29.469323  131235 out.go:360] Setting OutFile to fd 1 ...
	I0903 22:58:29.469628  131235 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0903 22:58:29.469638  131235 out.go:374] Setting ErrFile to fd 2...
	I0903 22:58:29.469643  131235 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0903 22:58:29.469822  131235 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21341-109162/.minikube/bin
	I0903 22:58:29.469996  131235 out.go:368] Setting JSON to false
	I0903 22:58:29.470034  131235 mustload.go:65] Loading cluster: ha-718270
	I0903 22:58:29.470119  131235 notify.go:220] Checking for updates...
	I0903 22:58:29.470379  131235 config.go:182] Loaded profile config "ha-718270": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0903 22:58:29.470395  131235 status.go:174] checking status of ha-718270 ...
	I0903 22:58:29.470799  131235 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21341-109162/.minikube/bin/docker-machine-driver-kvm2
	I0903 22:58:29.470835  131235 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0903 22:58:29.497097  131235 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34923
	I0903 22:58:29.497656  131235 main.go:141] libmachine: () Calling .GetVersion
	I0903 22:58:29.498192  131235 main.go:141] libmachine: Using API Version  1
	I0903 22:58:29.498229  131235 main.go:141] libmachine: () Calling .SetConfigRaw
	I0903 22:58:29.498665  131235 main.go:141] libmachine: () Calling .GetMachineName
	I0903 22:58:29.498833  131235 main.go:141] libmachine: (ha-718270) Calling .GetState
	I0903 22:58:29.500412  131235 status.go:371] ha-718270 host status = "Stopped" (err=<nil>)
	I0903 22:58:29.500430  131235 status.go:384] host is not running, skipping remaining checks
	I0903 22:58:29.500437  131235 status.go:176] ha-718270 status: &{Name:ha-718270 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0903 22:58:29.500460  131235 status.go:174] checking status of ha-718270-m02 ...
	I0903 22:58:29.500749  131235 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21341-109162/.minikube/bin/docker-machine-driver-kvm2
	I0903 22:58:29.500799  131235 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0903 22:58:29.515492  131235 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34781
	I0903 22:58:29.515938  131235 main.go:141] libmachine: () Calling .GetVersion
	I0903 22:58:29.516396  131235 main.go:141] libmachine: Using API Version  1
	I0903 22:58:29.516436  131235 main.go:141] libmachine: () Calling .SetConfigRaw
	I0903 22:58:29.516773  131235 main.go:141] libmachine: () Calling .GetMachineName
	I0903 22:58:29.516947  131235 main.go:141] libmachine: (ha-718270-m02) Calling .GetState
	I0903 22:58:29.518578  131235 status.go:371] ha-718270-m02 host status = "Stopped" (err=<nil>)
	I0903 22:58:29.518606  131235 status.go:384] host is not running, skipping remaining checks
	I0903 22:58:29.518612  131235 status.go:176] ha-718270-m02 status: &{Name:ha-718270-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0903 22:58:29.518632  131235 status.go:174] checking status of ha-718270-m04 ...
	I0903 22:58:29.519011  131235 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21341-109162/.minikube/bin/docker-machine-driver-kvm2
	I0903 22:58:29.519057  131235 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0903 22:58:29.533418  131235 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38261
	I0903 22:58:29.533818  131235 main.go:141] libmachine: () Calling .GetVersion
	I0903 22:58:29.534259  131235 main.go:141] libmachine: Using API Version  1
	I0903 22:58:29.534281  131235 main.go:141] libmachine: () Calling .SetConfigRaw
	I0903 22:58:29.534640  131235 main.go:141] libmachine: () Calling .GetMachineName
	I0903 22:58:29.534824  131235 main.go:141] libmachine: (ha-718270-m04) Calling .GetState
	I0903 22:58:29.536239  131235 status.go:371] ha-718270-m04 host status = "Stopped" (err=<nil>)
	I0903 22:58:29.536259  131235 status.go:384] host is not running, skipping remaining checks
	I0903 22:58:29.536266  131235 status.go:176] ha-718270-m04 status: &{Name:ha-718270-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (272.53s)

TestMultiControlPlane/serial/RestartCluster (117.67s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-718270 start --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio
E0903 22:58:59.139120  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/functional-381687/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0903 23:00:22.201453  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/functional-381687/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 -p ha-718270 start --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio: (1m56.829820795s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-718270 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (117.67s)
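The go-template in the final step above is how the test decides readiness: it walks every node's status.conditions and prints the status of the "Ready" condition. A minimal Go sketch of what that template evaluates, run over a stand-in for the kubectl JSON it normally receives (the sample data below is hypothetical; only the field names match what the template touches):

package main

import (
	"encoding/json"
	"os"
	"text/template"
)

func main() {
	// Stand-in for `kubectl get nodes -o json`, trimmed to the fields
	// the template reads (hypothetical sample data).
	const nodesJSON = `{"items":[{"status":{"conditions":[
		{"type":"MemoryPressure","status":"False"},
		{"type":"Ready","status":"True"}]}}]}`

	// The template string the test passes to kubectl, minus the shell quoting.
	const ready = `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`

	var nodes interface{}
	if err := json.Unmarshal([]byte(nodesJSON), &nodes); err != nil {
		panic(err)
	}
	// kubectl evaluates go-templates over untyped JSON, which is why the
	// lowercase keys (.items, .status, .type) resolve here as map lookups.
	t := template.Must(template.New("ready").Parse(ready))
	if err := t.Execute(os.Stdout, nodes); err != nil {
		panic(err)
	}
	// Prints " True" once per node whose Ready condition is True.
}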

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.66s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.66s)

TestMultiControlPlane/serial/AddSecondaryNode (82.92s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-718270 node add --control-plane --alsologtostderr -v 5
E0903 23:01:03.162631  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/addons-389176/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-718270 node add --control-plane --alsologtostderr -v 5: (1m22.007264151s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-718270 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (82.92s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.9s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.90s)

TestJSONOutput/start/Command (61.25s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-199399 --output=json --user=testUser --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-199399 --output=json --user=testUser --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio: (1m1.248356302s)
--- PASS: TestJSONOutput/start/Command (61.25s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.75s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-199399 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.75s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.66s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-199399 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.66s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (7.33s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-199399 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-199399 --output=json --user=testUser: (7.33101182s)
--- PASS: TestJSONOutput/stop/Command (7.33s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.2s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-286945 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-286945 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (62.035997ms)

-- stdout --
	{"specversion":"1.0","id":"7e301b5d-2f86-4f69-a6c6-37a5352f4482","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-286945] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"03c5a7eb-f019-413a-a0c8-c074aa07ae82","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21341"}}
	{"specversion":"1.0","id":"a9f4d5cb-ca5f-4484-8e8d-ff9623b3eb68","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"2e87077d-3fe3-47ca-b25c-101054cfaaed","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21341-109162/kubeconfig"}}
	{"specversion":"1.0","id":"7b0632a4-2ea6-418a-a325-a1c8a39253f3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21341-109162/.minikube"}}
	{"specversion":"1.0","id":"dbe5ae1b-d79a-45fb-9096-59bfd5984feb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"dfa4f917-9ddf-4468-a457-a0b7ac78c4ed","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"4dd8dc05-8c96-4919-8e85-e8d988737488","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-286945" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-286945
--- PASS: TestErrorJSONOutput (0.20s)
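Each line in the stdout block above is a CloudEvents-style JSON envelope; the last one is the error event carrying the exit code. A minimal Go sketch that picks error events out of such a stream (the struct mirrors only the keys visible in the log above; this is an illustration, not minikube's own decoder, and the sample line is a trimmed copy of the error event):

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"strings"
)

// event mirrors the envelope fields visible in the log above.
type event struct {
	SpecVersion string            `json:"specversion"`
	ID          string            `json:"id"`
	Source      string            `json:"source"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

func main() {
	// One event per line, as in the -- stdout -- block above.
	stream := `{"specversion":"1.0","id":"4dd8dc05","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","data":{"exitcode":"56","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS"}}`
	sc := bufio.NewScanner(strings.NewReader(stream))
	for sc.Scan() {
		var e event
		if err := json.Unmarshal(sc.Bytes(), &e); err != nil {
			continue // not an event line
		}
		if e.Type == "io.k8s.sigs.minikube.error" {
			fmt.Printf("exit %s: %s (%s)\n",
				e.Data["exitcode"], e.Data["message"], e.Data["name"])
		}
	}
}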

TestMainNoArgs (0.05s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

TestMinikubeProfile (92.39s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-024620 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-024620 --driver=kvm2  --container-runtime=crio: (44.979047165s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-038984 --driver=kvm2  --container-runtime=crio
E0903 23:03:59.139893  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/functional-381687/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-038984 --driver=kvm2  --container-runtime=crio: (44.480251374s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-024620
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-038984
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-038984" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-038984
helpers_test.go:175: Cleaning up "first-024620" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-024620
--- PASS: TestMinikubeProfile (92.39s)

TestMountStart/serial/StartWithMountFirst (27.92s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-253745 --memory=3072 --mount-string /tmp/TestMountStartserial3355107073/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-253745 --memory=3072 --mount-string /tmp/TestMountStartserial3355107073/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (26.919299584s)
--- PASS: TestMountStart/serial/StartWithMountFirst (27.92s)

TestMountStart/serial/VerifyMountFirst (0.39s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-253745 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-253745 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.39s)

TestMountStart/serial/StartWithMountSecond (29.2s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-269253 --memory=3072 --mount-string /tmp/TestMountStartserial3355107073/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-269253 --memory=3072 --mount-string /tmp/TestMountStartserial3355107073/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (28.20220765s)
--- PASS: TestMountStart/serial/StartWithMountSecond (29.20s)

TestMountStart/serial/VerifyMountSecond (0.38s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-269253 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-269253 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.38s)

TestMountStart/serial/DeleteFirst (0.71s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-253745 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.71s)

TestMountStart/serial/VerifyMountPostDelete (0.38s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-269253 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-269253 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.38s)

TestMountStart/serial/Stop (1.66s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-269253
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-269253: (1.663960977s)
--- PASS: TestMountStart/serial/Stop (1.66s)

TestMountStart/serial/RestartStopped (23.69s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-269253
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-269253: (22.693520192s)
--- PASS: TestMountStart/serial/RestartStopped (23.69s)

TestMountStart/serial/VerifyMountPostStop (0.38s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-269253 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-269253 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.38s)

TestMultiNode/serial/FreshStart2Nodes (110.15s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-688539 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-688539 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m49.69391608s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-688539 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (110.15s)

TestMultiNode/serial/DeployApp2Nodes (6.17s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-688539 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-688539 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-688539 -- rollout status deployment/busybox: (4.718229523s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-688539 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-688539 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-688539 -- exec busybox-7b57f96db7-g2mbd -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-688539 -- exec busybox-7b57f96db7-mqw57 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-688539 -- exec busybox-7b57f96db7-g2mbd -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-688539 -- exec busybox-7b57f96db7-mqw57 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-688539 -- exec busybox-7b57f96db7-g2mbd -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-688539 -- exec busybox-7b57f96db7-mqw57 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (6.17s)

TestMultiNode/serial/PingHostFrom2Pods (0.74s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-688539 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-688539 -- exec busybox-7b57f96db7-g2mbd -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-688539 -- exec busybox-7b57f96db7-g2mbd -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-688539 -- exec busybox-7b57f96db7-mqw57 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-688539 -- exec busybox-7b57f96db7-mqw57 -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.74s)
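The shell pipeline the test runs in each pod, nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3, takes the fifth line of nslookup output and extracts its third space-separated field, which in busybox is the resolved host IP that the subsequent ping targets. A Go sketch of the same extraction (the sample nslookup output below is hypothetical):

package main

import (
	"bufio"
	"fmt"
	"strings"
)

// hostIP mimics `awk 'NR==5' | cut -d' ' -f3`: fifth line,
// third space-separated field.
func hostIP(nslookupOut string) string {
	sc := bufio.NewScanner(strings.NewReader(nslookupOut))
	line := 0
	for sc.Scan() {
		line++
		if line == 5 {
			f := strings.Split(sc.Text(), " ")
			if len(f) >= 3 {
				return f[2]
			}
		}
	}
	return ""
}

func main() {
	// Hypothetical busybox nslookup output; the real test captures this
	// inside the pod.
	out := "Server:    10.96.0.10\n" +
		"Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local\n" +
		"\n" +
		"Name:      host.minikube.internal\n" +
		"Address 1: 192.168.39.1 host.minikube.internal\n"
	fmt.Println(hostIP(out)) // 192.168.39.1
}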

TestMultiNode/serial/AddNode (50.42s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-688539 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-688539 -v=5 --alsologtostderr: (49.833460167s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-688539 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (50.42s)

TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-688539 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

TestMultiNode/serial/ProfileList (0.62s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.62s)

TestMultiNode/serial/CopyFile (7.43s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-688539 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-688539 cp testdata/cp-test.txt multinode-688539:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-688539 ssh -n multinode-688539 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-688539 cp multinode-688539:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile349462103/001/cp-test_multinode-688539.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-688539 ssh -n multinode-688539 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-688539 cp multinode-688539:/home/docker/cp-test.txt multinode-688539-m02:/home/docker/cp-test_multinode-688539_multinode-688539-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-688539 ssh -n multinode-688539 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-688539 ssh -n multinode-688539-m02 "sudo cat /home/docker/cp-test_multinode-688539_multinode-688539-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-688539 cp multinode-688539:/home/docker/cp-test.txt multinode-688539-m03:/home/docker/cp-test_multinode-688539_multinode-688539-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-688539 ssh -n multinode-688539 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-688539 ssh -n multinode-688539-m03 "sudo cat /home/docker/cp-test_multinode-688539_multinode-688539-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-688539 cp testdata/cp-test.txt multinode-688539-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-688539 ssh -n multinode-688539-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-688539 cp multinode-688539-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile349462103/001/cp-test_multinode-688539-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-688539 ssh -n multinode-688539-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-688539 cp multinode-688539-m02:/home/docker/cp-test.txt multinode-688539:/home/docker/cp-test_multinode-688539-m02_multinode-688539.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-688539 ssh -n multinode-688539-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-688539 ssh -n multinode-688539 "sudo cat /home/docker/cp-test_multinode-688539-m02_multinode-688539.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-688539 cp multinode-688539-m02:/home/docker/cp-test.txt multinode-688539-m03:/home/docker/cp-test_multinode-688539-m02_multinode-688539-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-688539 ssh -n multinode-688539-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-688539 ssh -n multinode-688539-m03 "sudo cat /home/docker/cp-test_multinode-688539-m02_multinode-688539-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-688539 cp testdata/cp-test.txt multinode-688539-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-688539 ssh -n multinode-688539-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-688539 cp multinode-688539-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile349462103/001/cp-test_multinode-688539-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-688539 ssh -n multinode-688539-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-688539 cp multinode-688539-m03:/home/docker/cp-test.txt multinode-688539:/home/docker/cp-test_multinode-688539-m03_multinode-688539.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-688539 ssh -n multinode-688539-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-688539 ssh -n multinode-688539 "sudo cat /home/docker/cp-test_multinode-688539-m03_multinode-688539.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-688539 cp multinode-688539-m03:/home/docker/cp-test.txt multinode-688539-m02:/home/docker/cp-test_multinode-688539-m03_multinode-688539-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-688539 ssh -n multinode-688539-m03 "sudo cat /home/docker/cp-test.txt"
E0903 23:08:59.139245  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/functional-381687/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-688539 ssh -n multinode-688539-m02 "sudo cat /home/docker/cp-test_multinode-688539-m03_multinode-688539-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.43s)

TestMultiNode/serial/StopNode (2.51s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-688539 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-688539 node stop m03: (1.596056069s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-688539 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-688539 status: exit status 7 (473.083626ms)

-- stdout --
	multinode-688539
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-688539-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-688539-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-688539 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-688539 status --alsologtostderr: exit status 7 (444.536874ms)

-- stdout --
	multinode-688539
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-688539-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-688539-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0903 23:09:01.455065  138982 out.go:360] Setting OutFile to fd 1 ...
	I0903 23:09:01.455305  138982 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0903 23:09:01.455313  138982 out.go:374] Setting ErrFile to fd 2...
	I0903 23:09:01.455317  138982 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0903 23:09:01.455540  138982 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21341-109162/.minikube/bin
	I0903 23:09:01.455703  138982 out.go:368] Setting JSON to false
	I0903 23:09:01.455743  138982 mustload.go:65] Loading cluster: multinode-688539
	I0903 23:09:01.455851  138982 notify.go:220] Checking for updates...
	I0903 23:09:01.456246  138982 config.go:182] Loaded profile config "multinode-688539": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0903 23:09:01.456271  138982 status.go:174] checking status of multinode-688539 ...
	I0903 23:09:01.456783  138982 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21341-109162/.minikube/bin/docker-machine-driver-kvm2
	I0903 23:09:01.456826  138982 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0903 23:09:01.473055  138982 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39645
	I0903 23:09:01.473573  138982 main.go:141] libmachine: () Calling .GetVersion
	I0903 23:09:01.474056  138982 main.go:141] libmachine: Using API Version  1
	I0903 23:09:01.474081  138982 main.go:141] libmachine: () Calling .SetConfigRaw
	I0903 23:09:01.474455  138982 main.go:141] libmachine: () Calling .GetMachineName
	I0903 23:09:01.474685  138982 main.go:141] libmachine: (multinode-688539) Calling .GetState
	I0903 23:09:01.476468  138982 status.go:371] multinode-688539 host status = "Running" (err=<nil>)
	I0903 23:09:01.476495  138982 host.go:66] Checking if "multinode-688539" exists ...
	I0903 23:09:01.476783  138982 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21341-109162/.minikube/bin/docker-machine-driver-kvm2
	I0903 23:09:01.476828  138982 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0903 23:09:01.492581  138982 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35123
	I0903 23:09:01.493061  138982 main.go:141] libmachine: () Calling .GetVersion
	I0903 23:09:01.493543  138982 main.go:141] libmachine: Using API Version  1
	I0903 23:09:01.493563  138982 main.go:141] libmachine: () Calling .SetConfigRaw
	I0903 23:09:01.493896  138982 main.go:141] libmachine: () Calling .GetMachineName
	I0903 23:09:01.494086  138982 main.go:141] libmachine: (multinode-688539) Calling .GetIP
	I0903 23:09:01.496553  138982 main.go:141] libmachine: (multinode-688539) DBG | domain multinode-688539 has defined MAC address 52:54:00:89:e0:11 in network mk-multinode-688539
	I0903 23:09:01.496947  138982 main.go:141] libmachine: (multinode-688539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:e0:11", ip: ""} in network mk-multinode-688539: {Iface:virbr1 ExpiryTime:2025-09-04 00:06:18 +0000 UTC Type:0 Mac:52:54:00:89:e0:11 Iaid: IPaddr:192.168.39.146 Prefix:24 Hostname:multinode-688539 Clientid:01:52:54:00:89:e0:11}
	I0903 23:09:01.496973  138982 main.go:141] libmachine: (multinode-688539) DBG | domain multinode-688539 has defined IP address 192.168.39.146 and MAC address 52:54:00:89:e0:11 in network mk-multinode-688539
	I0903 23:09:01.497076  138982 host.go:66] Checking if "multinode-688539" exists ...
	I0903 23:09:01.497409  138982 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21341-109162/.minikube/bin/docker-machine-driver-kvm2
	I0903 23:09:01.497461  138982 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0903 23:09:01.512821  138982 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36659
	I0903 23:09:01.513162  138982 main.go:141] libmachine: () Calling .GetVersion
	I0903 23:09:01.513576  138982 main.go:141] libmachine: Using API Version  1
	I0903 23:09:01.513597  138982 main.go:141] libmachine: () Calling .SetConfigRaw
	I0903 23:09:01.513992  138982 main.go:141] libmachine: () Calling .GetMachineName
	I0903 23:09:01.514223  138982 main.go:141] libmachine: (multinode-688539) Calling .DriverName
	I0903 23:09:01.514423  138982 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0903 23:09:01.514454  138982 main.go:141] libmachine: (multinode-688539) Calling .GetSSHHostname
	I0903 23:09:01.517445  138982 main.go:141] libmachine: (multinode-688539) DBG | domain multinode-688539 has defined MAC address 52:54:00:89:e0:11 in network mk-multinode-688539
	I0903 23:09:01.517848  138982 main.go:141] libmachine: (multinode-688539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:e0:11", ip: ""} in network mk-multinode-688539: {Iface:virbr1 ExpiryTime:2025-09-04 00:06:18 +0000 UTC Type:0 Mac:52:54:00:89:e0:11 Iaid: IPaddr:192.168.39.146 Prefix:24 Hostname:multinode-688539 Clientid:01:52:54:00:89:e0:11}
	I0903 23:09:01.517881  138982 main.go:141] libmachine: (multinode-688539) DBG | domain multinode-688539 has defined IP address 192.168.39.146 and MAC address 52:54:00:89:e0:11 in network mk-multinode-688539
	I0903 23:09:01.518007  138982 main.go:141] libmachine: (multinode-688539) Calling .GetSSHPort
	I0903 23:09:01.518187  138982 main.go:141] libmachine: (multinode-688539) Calling .GetSSHKeyPath
	I0903 23:09:01.518356  138982 main.go:141] libmachine: (multinode-688539) Calling .GetSSHUsername
	I0903 23:09:01.518526  138982 sshutil.go:53] new ssh client: &{IP:192.168.39.146 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21341-109162/.minikube/machines/multinode-688539/id_rsa Username:docker}
	I0903 23:09:01.604851  138982 ssh_runner.go:195] Run: systemctl --version
	I0903 23:09:01.610659  138982 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0903 23:09:01.625881  138982 kubeconfig.go:125] found "multinode-688539" server: "https://192.168.39.146:8443"
	I0903 23:09:01.625912  138982 api_server.go:166] Checking apiserver status ...
	I0903 23:09:01.625943  138982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:09:01.643219  138982 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1399/cgroup
	W0903 23:09:01.655681  138982 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1399/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0903 23:09:01.655743  138982 ssh_runner.go:195] Run: ls
	I0903 23:09:01.660970  138982 api_server.go:253] Checking apiserver healthz at https://192.168.39.146:8443/healthz ...
	I0903 23:09:01.665550  138982 api_server.go:279] https://192.168.39.146:8443/healthz returned 200:
	ok
	I0903 23:09:01.665578  138982 status.go:463] multinode-688539 apiserver status = Running (err=<nil>)
	I0903 23:09:01.665588  138982 status.go:176] multinode-688539 status: &{Name:multinode-688539 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0903 23:09:01.665605  138982 status.go:174] checking status of multinode-688539-m02 ...
	I0903 23:09:01.665937  138982 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21341-109162/.minikube/bin/docker-machine-driver-kvm2
	I0903 23:09:01.665989  138982 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0903 23:09:01.681414  138982 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39339
	I0903 23:09:01.681829  138982 main.go:141] libmachine: () Calling .GetVersion
	I0903 23:09:01.682224  138982 main.go:141] libmachine: Using API Version  1
	I0903 23:09:01.682245  138982 main.go:141] libmachine: () Calling .SetConfigRaw
	I0903 23:09:01.682654  138982 main.go:141] libmachine: () Calling .GetMachineName
	I0903 23:09:01.682915  138982 main.go:141] libmachine: (multinode-688539-m02) Calling .GetState
	I0903 23:09:01.684545  138982 status.go:371] multinode-688539-m02 host status = "Running" (err=<nil>)
	I0903 23:09:01.684566  138982 host.go:66] Checking if "multinode-688539-m02" exists ...
	I0903 23:09:01.684976  138982 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21341-109162/.minikube/bin/docker-machine-driver-kvm2
	I0903 23:09:01.685026  138982 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0903 23:09:01.700346  138982 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42303
	I0903 23:09:01.702600  138982 main.go:141] libmachine: () Calling .GetVersion
	I0903 23:09:01.703155  138982 main.go:141] libmachine: Using API Version  1
	I0903 23:09:01.703185  138982 main.go:141] libmachine: () Calling .SetConfigRaw
	I0903 23:09:01.703610  138982 main.go:141] libmachine: () Calling .GetMachineName
	I0903 23:09:01.703788  138982 main.go:141] libmachine: (multinode-688539-m02) Calling .GetIP
	I0903 23:09:01.706528  138982 main.go:141] libmachine: (multinode-688539-m02) DBG | domain multinode-688539-m02 has defined MAC address 52:54:00:d7:bc:c3 in network mk-multinode-688539
	I0903 23:09:01.706912  138982 main.go:141] libmachine: (multinode-688539-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:bc:c3", ip: ""} in network mk-multinode-688539: {Iface:virbr1 ExpiryTime:2025-09-04 00:07:17 +0000 UTC Type:0 Mac:52:54:00:d7:bc:c3 Iaid: IPaddr:192.168.39.126 Prefix:24 Hostname:multinode-688539-m02 Clientid:01:52:54:00:d7:bc:c3}
	I0903 23:09:01.706947  138982 main.go:141] libmachine: (multinode-688539-m02) DBG | domain multinode-688539-m02 has defined IP address 192.168.39.126 and MAC address 52:54:00:d7:bc:c3 in network mk-multinode-688539
	I0903 23:09:01.707051  138982 host.go:66] Checking if "multinode-688539-m02" exists ...
	I0903 23:09:01.707348  138982 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21341-109162/.minikube/bin/docker-machine-driver-kvm2
	I0903 23:09:01.707385  138982 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0903 23:09:01.722358  138982 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39553
	I0903 23:09:01.722750  138982 main.go:141] libmachine: () Calling .GetVersion
	I0903 23:09:01.723154  138982 main.go:141] libmachine: Using API Version  1
	I0903 23:09:01.723176  138982 main.go:141] libmachine: () Calling .SetConfigRaw
	I0903 23:09:01.723531  138982 main.go:141] libmachine: () Calling .GetMachineName
	I0903 23:09:01.723705  138982 main.go:141] libmachine: (multinode-688539-m02) Calling .DriverName
	I0903 23:09:01.723884  138982 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0903 23:09:01.723905  138982 main.go:141] libmachine: (multinode-688539-m02) Calling .GetSSHHostname
	I0903 23:09:01.726755  138982 main.go:141] libmachine: (multinode-688539-m02) DBG | domain multinode-688539-m02 has defined MAC address 52:54:00:d7:bc:c3 in network mk-multinode-688539
	I0903 23:09:01.727212  138982 main.go:141] libmachine: (multinode-688539-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:bc:c3", ip: ""} in network mk-multinode-688539: {Iface:virbr1 ExpiryTime:2025-09-04 00:07:17 +0000 UTC Type:0 Mac:52:54:00:d7:bc:c3 Iaid: IPaddr:192.168.39.126 Prefix:24 Hostname:multinode-688539-m02 Clientid:01:52:54:00:d7:bc:c3}
	I0903 23:09:01.727242  138982 main.go:141] libmachine: (multinode-688539-m02) DBG | domain multinode-688539-m02 has defined IP address 192.168.39.126 and MAC address 52:54:00:d7:bc:c3 in network mk-multinode-688539
	I0903 23:09:01.727369  138982 main.go:141] libmachine: (multinode-688539-m02) Calling .GetSSHPort
	I0903 23:09:01.727559  138982 main.go:141] libmachine: (multinode-688539-m02) Calling .GetSSHKeyPath
	I0903 23:09:01.727715  138982 main.go:141] libmachine: (multinode-688539-m02) Calling .GetSSHUsername
	I0903 23:09:01.727870  138982 sshutil.go:53] new ssh client: &{IP:192.168.39.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21341-109162/.minikube/machines/multinode-688539-m02/id_rsa Username:docker}
	I0903 23:09:01.810042  138982 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0903 23:09:01.831948  138982 status.go:176] multinode-688539-m02 status: &{Name:multinode-688539-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0903 23:09:01.832001  138982 status.go:174] checking status of multinode-688539-m03 ...
	I0903 23:09:01.832356  138982 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21341-109162/.minikube/bin/docker-machine-driver-kvm2
	I0903 23:09:01.832402  138982 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0903 23:09:01.848293  138982 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45793
	I0903 23:09:01.848778  138982 main.go:141] libmachine: () Calling .GetVersion
	I0903 23:09:01.849250  138982 main.go:141] libmachine: Using API Version  1
	I0903 23:09:01.849273  138982 main.go:141] libmachine: () Calling .SetConfigRaw
	I0903 23:09:01.849733  138982 main.go:141] libmachine: () Calling .GetMachineName
	I0903 23:09:01.849954  138982 main.go:141] libmachine: (multinode-688539-m03) Calling .GetState
	I0903 23:09:01.851746  138982 status.go:371] multinode-688539-m03 host status = "Stopped" (err=<nil>)
	I0903 23:09:01.851761  138982 status.go:384] host is not running, skipping remaining checks
	I0903 23:09:01.851780  138982 status.go:176] multinode-688539-m03 status: &{Name:multinode-688539-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.51s)
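The stderr log above shows how the status command probes a control-plane node: GetState for the VM, systemctl is-active for the kubelet, then an HTTPS GET against /healthz on port 8443 for the apiserver ("returned 200: ok" when healthy). A reduced sketch of that last probe; the real client authenticates with the cluster CA, so skipping TLS verification here is purely for illustration, and the address is the one from the log:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Illustration only; minikube's status check trusts the
			// cluster CA instead of skipping verification.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://192.168.39.146:8443/healthz")
	if err != nil {
		fmt.Println("apiserver: Stopped or unreachable:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	// A healthy apiserver answers 200 with body "ok".
	fmt.Printf("apiserver: %d %s\n", resp.StatusCode, body)
}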

TestMultiNode/serial/StartAfterStop (38.98s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-688539 node start m03 -v=5 --alsologtostderr
E0903 23:09:06.242424  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/addons-389176/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-688539 node start m03 -v=5 --alsologtostderr: (38.32261084s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-688539 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (38.98s)

TestMultiNode/serial/RestartKeepsNodes (327.96s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-688539
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-688539
E0903 23:11:03.160625  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/addons-389176/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-688539: (3m3.197126441s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-688539 --wait=true -v=5 --alsologtostderr
E0903 23:13:59.139239  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/functional-381687/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-688539 --wait=true -v=5 --alsologtostderr: (2m24.661803074s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-688539
--- PASS: TestMultiNode/serial/RestartKeepsNodes (327.96s)

TestMultiNode/serial/DeleteNode (2.77s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-688539 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-688539 node delete m03: (2.219676648s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-688539 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.77s)

TestMultiNode/serial/StopMultiNode (182.08s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-688539 stop
E0903 23:16:03.164010  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/addons-389176/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0903 23:17:02.205560  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/functional-381687/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-688539 stop: (3m1.910811809s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-688539 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-688539 status: exit status 7 (88.645828ms)

-- stdout --
	multinode-688539
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-688539-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-688539 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-688539 status --alsologtostderr: exit status 7 (83.33261ms)

-- stdout --
	multinode-688539
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-688539-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0903 23:18:13.605283  142397 out.go:360] Setting OutFile to fd 1 ...
	I0903 23:18:13.605580  142397 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0903 23:18:13.605591  142397 out.go:374] Setting ErrFile to fd 2...
	I0903 23:18:13.605597  142397 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0903 23:18:13.605808  142397 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21341-109162/.minikube/bin
	I0903 23:18:13.606000  142397 out.go:368] Setting JSON to false
	I0903 23:18:13.606044  142397 mustload.go:65] Loading cluster: multinode-688539
	I0903 23:18:13.606151  142397 notify.go:220] Checking for updates...
	I0903 23:18:13.606470  142397 config.go:182] Loaded profile config "multinode-688539": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0903 23:18:13.606496  142397 status.go:174] checking status of multinode-688539 ...
	I0903 23:18:13.606899  142397 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21341-109162/.minikube/bin/docker-machine-driver-kvm2
	I0903 23:18:13.606947  142397 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0903 23:18:13.621966  142397 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35261
	I0903 23:18:13.622432  142397 main.go:141] libmachine: () Calling .GetVersion
	I0903 23:18:13.622997  142397 main.go:141] libmachine: Using API Version  1
	I0903 23:18:13.623024  142397 main.go:141] libmachine: () Calling .SetConfigRaw
	I0903 23:18:13.623359  142397 main.go:141] libmachine: () Calling .GetMachineName
	I0903 23:18:13.623536  142397 main.go:141] libmachine: (multinode-688539) Calling .GetState
	I0903 23:18:13.625071  142397 status.go:371] multinode-688539 host status = "Stopped" (err=<nil>)
	I0903 23:18:13.625083  142397 status.go:384] host is not running, skipping remaining checks
	I0903 23:18:13.625090  142397 status.go:176] multinode-688539 status: &{Name:multinode-688539 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0903 23:18:13.625127  142397 status.go:174] checking status of multinode-688539-m02 ...
	I0903 23:18:13.625464  142397 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/21341-109162/.minikube/bin/docker-machine-driver-kvm2
	I0903 23:18:13.625499  142397 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0903 23:18:13.640218  142397 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45977
	I0903 23:18:13.640660  142397 main.go:141] libmachine: () Calling .GetVersion
	I0903 23:18:13.641072  142397 main.go:141] libmachine: Using API Version  1
	I0903 23:18:13.641100  142397 main.go:141] libmachine: () Calling .SetConfigRaw
	I0903 23:18:13.641428  142397 main.go:141] libmachine: () Calling .GetMachineName
	I0903 23:18:13.641606  142397 main.go:141] libmachine: (multinode-688539-m02) Calling .GetState
	I0903 23:18:13.643043  142397 status.go:371] multinode-688539-m02 host status = "Stopped" (err=<nil>)
	I0903 23:18:13.643057  142397 status.go:384] host is not running, skipping remaining checks
	I0903 23:18:13.643064  142397 status.go:176] multinode-688539-m02 status: &{Name:multinode-688539-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (182.08s)
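Note that both status calls above pass despite exiting non-zero: minikube status encodes cluster state in its exit code, and in this run a fully stopped two-node cluster consistently yields exit status 7. A sketch of scripting around that behavior (the 0/7 meaning is inferred from this run's output, not from documentation):

    out/minikube-linux-amd64 -p multinode-688539 status
    rc=$?
    # Observed here: 0 when everything is running, 7 when the hosts are stopped.
    if [ "$rc" -ne 0 ]; then
      echo "cluster not fully running (exit code $rc)"
    fi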

TestMultiNode/serial/RestartMultiNode (102.39s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-688539 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0903 23:18:59.140294  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/functional-381687/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-688539 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m41.76856762s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-688539 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (102.39s)

TestMultiNode/serial/ValidateNameConflict (47.35s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-688539
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-688539-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-688539-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (69.459801ms)

-- stdout --
	* [multinode-688539-m02] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21341
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21341-109162/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21341-109162/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-688539-m02' is duplicated with machine name 'multinode-688539-m02' in profile 'multinode-688539'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-688539-m03 --driver=kvm2  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-688539-m03 --driver=kvm2  --container-runtime=crio: (46.213146534s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-688539
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-688539: exit status 80 (229.202284ms)

-- stdout --
	* Adding node m03 to cluster multinode-688539 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-688539-m03 already exists in multinode-688539-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-688539-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (47.35s)
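Both rejections above stem from minikube's requirement that profile and machine names be globally unique: 'multinode-688539-m02' is already a machine inside the 'multinode-688539' profile, and the freshly started 'multinode-688539-m03' profile blocks node add from reusing that name. A quick way to see which names are taken before picking a new one (sketch; <unique-name> is a placeholder, the binary path is this run's):

    # Every profile, and thus every reserved machine name, minikube knows about.
    out/minikube-linux-amd64 profile list
    # Then start under a name that does not collide:
    out/minikube-linux-amd64 start -p <unique-name> --driver=kvm2 --container-runtime=crio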

TestScheduledStopUnix (117.8s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-796164 --memory=3072 --driver=kvm2  --container-runtime=crio
E0903 23:23:59.143971  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/functional-381687/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-796164 --memory=3072 --driver=kvm2  --container-runtime=crio: (46.165983505s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-796164 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-796164 -n scheduled-stop-796164
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-796164 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I0903 23:24:24.004945  113288 retry.go:31] will retry after 121.74µs: open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/scheduled-stop-796164/pid: no such file or directory
I0903 23:24:24.006115  113288 retry.go:31] will retry after 166.876µs: open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/scheduled-stop-796164/pid: no such file or directory
I0903 23:24:24.007255  113288 retry.go:31] will retry after 156.67µs: open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/scheduled-stop-796164/pid: no such file or directory
I0903 23:24:24.008378  113288 retry.go:31] will retry after 453.111µs: open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/scheduled-stop-796164/pid: no such file or directory
I0903 23:24:24.009493  113288 retry.go:31] will retry after 259.666µs: open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/scheduled-stop-796164/pid: no such file or directory
I0903 23:24:24.010645  113288 retry.go:31] will retry after 769.659µs: open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/scheduled-stop-796164/pid: no such file or directory
I0903 23:24:24.011777  113288 retry.go:31] will retry after 1.121582ms: open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/scheduled-stop-796164/pid: no such file or directory
I0903 23:24:24.014011  113288 retry.go:31] will retry after 1.258412ms: open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/scheduled-stop-796164/pid: no such file or directory
I0903 23:24:24.016199  113288 retry.go:31] will retry after 2.261768ms: open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/scheduled-stop-796164/pid: no such file or directory
I0903 23:24:24.019432  113288 retry.go:31] will retry after 3.770097ms: open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/scheduled-stop-796164/pid: no such file or directory
I0903 23:24:24.023653  113288 retry.go:31] will retry after 5.850318ms: open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/scheduled-stop-796164/pid: no such file or directory
I0903 23:24:24.029973  113288 retry.go:31] will retry after 12.320224ms: open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/scheduled-stop-796164/pid: no such file or directory
I0903 23:24:24.043249  113288 retry.go:31] will retry after 10.347209ms: open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/scheduled-stop-796164/pid: no such file or directory
I0903 23:24:24.054468  113288 retry.go:31] will retry after 16.407201ms: open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/scheduled-stop-796164/pid: no such file or directory
I0903 23:24:24.071693  113288 retry.go:31] will retry after 22.22598ms: open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/scheduled-stop-796164/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-796164 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-796164 -n scheduled-stop-796164
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-796164
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-796164 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-796164
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-796164: exit status 7 (76.779262ms)

-- stdout --
	scheduled-stop-796164
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-796164 -n scheduled-stop-796164
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-796164 -n scheduled-stop-796164: exit status 7 (65.572879ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-796164" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-796164
--- PASS: TestScheduledStopUnix (117.80s)
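Pieced together from the commands this test runs, the scheduled-stop workflow looks roughly like the following sketch (the profile name is the test's own; every flag appears verbatim above):

    # Arm a stop five minutes out; the command returns immediately.
    minikube stop -p scheduled-stop-796164 --schedule 5m
    # Inspect the pending schedule through the TimeToStop status field.
    minikube status --format={{.TimeToStop}} -p scheduled-stop-796164
    # Cancel the pending stop...
    minikube stop -p scheduled-stop-796164 --cancel-scheduled
    # ...or re-arm it with a short delay and let it fire; status then exits 7.
    minikube stop -p scheduled-stop-796164 --schedule 15s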

TestRunningBinaryUpgrade (153.51s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
E0903 23:25:46.243749  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/addons-389176/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.307044565 start -p running-upgrade-210842 --memory=3072 --vm-driver=kvm2  --container-runtime=crio
E0903 23:26:03.160990  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/addons-389176/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.307044565 start -p running-upgrade-210842 --memory=3072 --vm-driver=kvm2  --container-runtime=crio: (1m48.666500928s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-210842 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-210842 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (41.495864461s)
helpers_test.go:175: Cleaning up "running-upgrade-210842" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-210842
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-210842: (1.160517161s)
--- PASS: TestRunningBinaryUpgrade (153.51s)

TestStoppedBinaryUpgrade/Setup (2.83s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.83s)

TestStoppedBinaryUpgrade/Upgrade (237.41s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.1908845442 start -p stopped-upgrade-924805 --memory=3072 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.1908845442 start -p stopped-upgrade-924805 --memory=3072 --vm-driver=kvm2  --container-runtime=crio: (1m38.338013045s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.1908845442 -p stopped-upgrade-924805 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.1908845442 -p stopped-upgrade-924805 stop: (1m31.797228061s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-924805 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-924805 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (47.277933803s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (237.41s)

TestNetworkPlugins/group/false (3.18s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-380966 --memory=3072 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-380966 --memory=3072 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (111.943682ms)

-- stdout --
	* [false-380966] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21341
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21341-109162/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21341-109162/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I0903 23:25:38.402566  146717 out.go:360] Setting OutFile to fd 1 ...
	I0903 23:25:38.402834  146717 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0903 23:25:38.402846  146717 out.go:374] Setting ErrFile to fd 2...
	I0903 23:25:38.402850  146717 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0903 23:25:38.403092  146717 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21341-109162/.minikube/bin
	I0903 23:25:38.403798  146717 out.go:368] Setting JSON to false
	I0903 23:25:38.404782  146717 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":7682,"bootTime":1756934256,"procs":190,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1083-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0903 23:25:38.404869  146717 start.go:140] virtualization: kvm guest
	I0903 23:25:38.406530  146717 out.go:179] * [false-380966] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	I0903 23:25:38.407650  146717 out.go:179]   - MINIKUBE_LOCATION=21341
	I0903 23:25:38.407681  146717 notify.go:220] Checking for updates...
	I0903 23:25:38.409750  146717 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0903 23:25:38.410905  146717 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21341-109162/kubeconfig
	I0903 23:25:38.412057  146717 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21341-109162/.minikube
	I0903 23:25:38.414131  146717 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0903 23:25:38.415340  146717 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0903 23:25:38.416845  146717 config.go:182] Loaded profile config "kubernetes-upgrade-938492": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0903 23:25:38.416934  146717 config.go:182] Loaded profile config "offline-crio-911470": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0903 23:25:38.417028  146717 driver.go:421] Setting default libvirt URI to qemu:///system
	I0903 23:25:38.454253  146717 out.go:179] * Using the kvm2 driver based on user configuration
	I0903 23:25:38.455394  146717 start.go:304] selected driver: kvm2
	I0903 23:25:38.455415  146717 start.go:918] validating driver "kvm2" against <nil>
	I0903 23:25:38.455432  146717 start.go:929] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0903 23:25:38.457430  146717 out.go:203] 
	W0903 23:25:38.458450  146717 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0903 23:25:38.459412  146717 out.go:203] 

** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-380966 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-380966

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-380966

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-380966

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-380966

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-380966

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-380966

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-380966

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-380966

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-380966

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-380966

>>> host: /etc/nsswitch.conf:
* Profile "false-380966" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-380966"

>>> host: /etc/hosts:
* Profile "false-380966" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-380966"

>>> host: /etc/resolv.conf:
* Profile "false-380966" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-380966"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-380966

>>> host: crictl pods:
* Profile "false-380966" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-380966"

>>> host: crictl containers:
* Profile "false-380966" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-380966"

>>> k8s: describe netcat deployment:
error: context "false-380966" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-380966" does not exist

>>> k8s: netcat logs:
error: context "false-380966" does not exist

>>> k8s: describe coredns deployment:
error: context "false-380966" does not exist

>>> k8s: describe coredns pods:
error: context "false-380966" does not exist

>>> k8s: coredns logs:
error: context "false-380966" does not exist

>>> k8s: describe api server pod(s):
error: context "false-380966" does not exist

>>> k8s: api server logs:
error: context "false-380966" does not exist

>>> host: /etc/cni:
* Profile "false-380966" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-380966"

>>> host: ip a s:
* Profile "false-380966" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-380966"

>>> host: ip r s:
* Profile "false-380966" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-380966"

>>> host: iptables-save:
* Profile "false-380966" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-380966"

>>> host: iptables table nat:
* Profile "false-380966" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-380966"

>>> k8s: describe kube-proxy daemon set:
error: context "false-380966" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-380966" does not exist

>>> k8s: kube-proxy logs:
error: context "false-380966" does not exist

>>> host: kubelet daemon status:
* Profile "false-380966" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-380966"

>>> host: kubelet daemon config:
* Profile "false-380966" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-380966"

>>> k8s: kubelet logs:
* Profile "false-380966" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-380966"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-380966" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-380966"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-380966" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-380966"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-380966

>>> host: docker daemon status:
* Profile "false-380966" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-380966"

>>> host: docker daemon config:
* Profile "false-380966" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-380966"

>>> host: /etc/docker/daemon.json:
* Profile "false-380966" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-380966"

>>> host: docker system info:
* Profile "false-380966" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-380966"

>>> host: cri-docker daemon status:
* Profile "false-380966" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-380966"

>>> host: cri-docker daemon config:
* Profile "false-380966" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-380966"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-380966" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-380966"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-380966" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-380966"

>>> host: cri-dockerd version:
* Profile "false-380966" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-380966"

>>> host: containerd daemon status:
* Profile "false-380966" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-380966"

>>> host: containerd daemon config:
* Profile "false-380966" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-380966"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-380966" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-380966"

>>> host: /etc/containerd/config.toml:
* Profile "false-380966" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-380966"

>>> host: containerd config dump:
* Profile "false-380966" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-380966"

>>> host: crio daemon status:
* Profile "false-380966" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-380966"

>>> host: crio daemon config:
* Profile "false-380966" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-380966"

>>> host: /etc/crio:
* Profile "false-380966" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-380966"

>>> host: crio config:
* Profile "false-380966" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-380966"

----------------------- debugLogs end: false-380966 [took: 2.914499001s] --------------------------------
helpers_test.go:175: Cleaning up "false-380966" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-380966
--- PASS: TestNetworkPlugins/group/false (3.18s)
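The constraint exercised here is unconditional: with --container-runtime=crio, minikube rejects --cni=false at validation time (the MK_USAGE exit above), so the test never reaches cluster creation. A sketch contrasting the rejected call with an accepted one, using only flag combinations that appear in this run:

    # Rejected: the crio runtime requires a CNI plugin.
    minikube start -p false-380966 --cni=false --driver=kvm2 --container-runtime=crio
    # Accepted: name a concrete CNI instead, e.g. kindnet, calico, or a manifest path.
    minikube start -p kindnet-380966 --cni=kindnet --driver=kvm2 --container-runtime=crio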

TestPause/serial/Start (75.27s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-957460 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-957460 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (1m15.272682684s)
--- PASS: TestPause/serial/Start (75.27s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.83s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-924805
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.83s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-561956 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-561956 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio: exit status 14 (75.818728ms)

-- stdout --
	* [NoKubernetes-561956] minikube v1.36.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21341
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21341-109162/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21341-109162/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)
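The same MK_USAGE error fires whether the version comes from the command line (as here) or from a kubernetes-version key pinned in minikube's global config, which is what the suggested unset addresses. Sketch of the failing call and the suggested fix, both taken from the output above:

    # Fails: --no-kubernetes contradicts an explicit Kubernetes version.
    minikube start -p NoKubernetes-561956 --no-kubernetes --kubernetes-version=1.20
    # Fix for the global-config case, straight from the error message:
    minikube config unset kubernetes-version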

TestNoKubernetes/serial/StartWithK8s (54.68s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-561956 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-561956 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (54.361101851s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-561956 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (54.68s)

TestNoKubernetes/serial/StartWithStopK8s (29.96s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-561956 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-561956 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (28.634467249s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-561956 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-561956 status -o json: exit status 2 (277.113144ms)

-- stdout --
	{"Name":"NoKubernetes-561956","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-561956
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-561956: (1.051886446s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (29.96s)

TestNoKubernetes/serial/Start (44.75s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-561956 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
E0903 23:31:03.161657  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/addons-389176/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-561956 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (44.750229249s)
--- PASS: TestNoKubernetes/serial/Start (44.75s)

TestNetworkPlugins/group/auto/Start (93.59s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-380966 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-380966 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (1m33.588992466s)
--- PASS: TestNetworkPlugins/group/auto/Start (93.59s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.21s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-561956 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-561956 "sudo systemctl is-active --quiet service kubelet": exit status 1 (214.298378ms)

** stderr ** 
	ssh: Process exited with status 4

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.21s)
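This check asserts a negative: systemctl is-active exits non-zero when the unit is not active, so the test passes precisely because the ssh command fails. As a standalone probe (the exit-code branching is the only addition; the ssh command itself is verbatim from above):

    if out/minikube-linux-amd64 ssh -p NoKubernetes-561956 "sudo systemctl is-active --quiet service kubelet"; then
      echo "kubelet is running (unexpected for a --no-kubernetes profile)"
    else
      echo "kubelet is not active, as expected"
    fi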

TestNoKubernetes/serial/ProfileList (1.56s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.56s)

TestNoKubernetes/serial/Stop (1.33s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-561956
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-561956: (1.331706113s)
--- PASS: TestNoKubernetes/serial/Stop (1.33s)

TestNoKubernetes/serial/StartNoArgs (47.07s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-561956 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-561956 --driver=kvm2  --container-runtime=crio: (47.067315857s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (47.07s)

TestNetworkPlugins/group/kindnet/Start (65.75s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-380966 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-380966 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m5.747781441s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (65.75s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.22s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-561956 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-561956 "sudo systemctl is-active --quiet service kubelet": exit status 1 (216.77164ms)

** stderr ** 
	ssh: Process exited with status 4

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.22s)

TestNetworkPlugins/group/calico/Start (94.76s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-380966 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-380966 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (1m34.759951973s)
--- PASS: TestNetworkPlugins/group/calico/Start (94.76s)

TestNetworkPlugins/group/auto/KubeletFlags (0.23s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-380966 "pgrep -a kubelet"
I0903 23:32:46.098109  113288 config.go:182] Loaded profile config "auto-380966": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.23s)

TestNetworkPlugins/group/auto/NetCatPod (9.24s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-380966 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-g7gft" [200b986b-3cb5-4e35-a1f1-c42005c117d5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-g7gft" [200b986b-3cb5-4e35-a1f1-c42005c117d5] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 9.004636611s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (9.24s)

TestNetworkPlugins/group/auto/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-380966 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.15s)
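Note: a successful nslookup kubernetes.default from inside the pod shows that the pod's resolver reaches cluster DNS and can resolve service names, the baseline that the Localhost and HairPin checks below build on.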

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-380966 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.11s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-380966 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.11s)
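Note on the last two checks: Localhost dials 127.0.0.1:8080 from inside the netcat pod, while HairPin dials the pod's own service name (netcat), so the connection leaves the pod and hairpins back through the service VIP. In both, nc -z only probes reachability without sending data, and -w 5 bounds the wait at five seconds.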

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (96.25s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-380966 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-380966 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (1m36.250609273s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (96.25s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-wppxj" [89300143-2588-4912-b38d-b229d8c1c7bb] Running
E0903 23:33:42.207966  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/functional-381687/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.00668864s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
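Note: the interleaved cert_rotation.go errors here and throughout the rest of this report are background noise: a client-cert reload watcher in the test binary still references client.crt files of profiles whose files no longer exist (functional-381687 here; auto-380966, kindnet-380966, calico-380966 and custom-flannel-380966 later). They do not appear to affect the tests they are logged under, all of which pass.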

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-380966 "pgrep -a kubelet"
I0903 23:33:44.055967  113288 config.go:182] Loaded profile config "kindnet-380966": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.29s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (11.38s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-380966 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-ph7zq" [ce427e71-eb21-42e7-ba83-0b2525412a68] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-ph7zq" [ce427e71-eb21-42e7-ba83-0b2525412a68] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.004254886s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.38s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (75.74s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-380966 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-380966 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (1m15.74237189s)
--- PASS: TestNetworkPlugins/group/bridge/Start (75.74s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-380966 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.15s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-380966 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-380966 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.12s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-tfb8v" [62e2427a-52ff-444f-ae45-1afb978abab6] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004748969s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (89.11s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-380966 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-380966 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (1m29.110386026s)
--- PASS: TestNetworkPlugins/group/flannel/Start (89.11s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.21s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-380966 "pgrep -a kubelet"
I0903 23:34:19.051331  113288 config.go:182] Loaded profile config "calico-380966": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.21s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (10.22s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-380966 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-z2v8w" [593bbb49-f376-44b8-b168-22b5720bbc86] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-z2v8w" [593bbb49-f376-44b8-b168-22b5720bbc86] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.006764607s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.22s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.2s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-380966 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.20s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-380966 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.17s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-380966 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.16s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (70.55s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-380966 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-380966 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (1m10.545434627s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (70.55s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.22s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-380966 "pgrep -a kubelet"
I0903 23:34:49.375965  113288 config.go:182] Loaded profile config "custom-flannel-380966": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.22s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (11.22s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-380966 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-fjcxg" [9c10b6d9-83d8-4145-9f81-758149ba8ce3] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-fjcxg" [9c10b6d9-83d8-4145-9f81-758149ba8ce3] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.003607235s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.22s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-380966 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.15s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-380966 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.12s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-380966 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.12s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.25s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-380966 "pgrep -a kubelet"
I0903 23:35:06.028063  113288 config.go:182] Loaded profile config "bridge-380966": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.25s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (11.32s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-380966 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-7jrnw" [1241910c-b86d-4777-a0a1-f589eff82686] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-7jrnw" [1241910c-b86d-4777-a0a1-f589eff82686] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.00445263s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.32s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-380966 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.17s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-380966 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-380966 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.14s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (86.14s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-434043 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-434043 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.0: (1m26.143360521s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (86.14s)
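Note: --preload=false skips minikube's preloaded image tarball, so the container runtime pulls each Kubernetes image individually; the comparatively long FirstStart above is consistent with that.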

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-xrfsv" [7d7f8f5e-aff5-4344-97df-610dcda0255e] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004337114s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.25s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-380966 "pgrep -a kubelet"
I0903 23:35:50.397117  113288 config.go:182] Loaded profile config "flannel-380966": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.25s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (14.32s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-380966 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-lxsgg" [1bbe2d9f-32b3-4479-8a4b-ade92d0b9ed9] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-lxsgg" [1bbe2d9f-32b3-4479-8a4b-ade92d0b9ed9] Running
E0903 23:36:03.160568  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/addons-389176/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 14.003735828s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (14.32s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.25s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-380966 "pgrep -a kubelet"
I0903 23:35:57.959691  113288 config.go:182] Loaded profile config "enable-default-cni-380966": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.25s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.29s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-380966 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-b6xtb" [4e673dc0-0301-4dc7-a456-1c33f1765a99] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-b6xtb" [4e673dc0-0301-4dc7-a456-1c33f1765a99] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.004107649s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.29s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-380966 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.13s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-380966 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.11s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-380966 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.11s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.2s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-380966 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.20s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-380966 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.11s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-380966 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.12s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (60.66s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-088493 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-088493 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.0: (1m0.66321606s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (60.66s)
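Note: --embed-certs inlines the client certificate data into kubeconfig instead of referencing certificate files on disk, which is what this group exercises across the stop/start cycle below.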

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (82.24s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-799704 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-799704 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.0: (1m22.237300482s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (82.24s)
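Note: --apiserver-port=8444 moves the API server off its default 8443; the default-k8s-diff-port group runs the same deploy/stop/restart sequence as the other groups to check that a non-default port survives it.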

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (10.34s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-434043 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [6ab669d5-20c3-4a86-a8d0-4951ecc407af] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [6ab669d5-20c3-4a86-a8d0-4951ecc407af] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.005577015s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-434043 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.34s)
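Note: after the busybox pod turns healthy, the step execs ulimit -n to read the open-file limit inside the container. A small sketch of collecting that value programmatically (context and pod name from the run above; a hypothetical helper, not the harness's code):

	package main

	import (
		"fmt"
		"os/exec"
		"strconv"
		"strings"
	)

	func main() {
		// Run `ulimit -n` inside the busybox pod and parse the number.
		out, err := exec.Command("kubectl", "--context", "no-preload-434043",
			"exec", "busybox", "--", "/bin/sh", "-c", "ulimit -n").Output()
		if err != nil {
			panic(err)
		}
		n, err := strconv.Atoi(strings.TrimSpace(string(out)))
		if err != nil {
			panic(err)
		}
		fmt.Println("open-file limit inside the pod:", n)
	}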

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.13s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-434043 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-434043 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.043686085s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-434043 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.13s)
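Note: the --images=MetricsServer=registry.k8s.io/echoserver:1.4 and --registries=MetricsServer=fake.domain overrides deliberately point the addon at an image it cannot pull; the step then only checks via kubectl describe that the overrides landed in the deployment spec, not that metrics-server becomes ready.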

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (90.85s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-434043 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-434043 --alsologtostderr -v=3: (1m30.85455444s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (90.85s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (10.28s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-088493 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [7606c722-5412-4996-af94-208816b8ae72] Pending
helpers_test.go:352: "busybox" [7606c722-5412-4996-af94-208816b8ae72] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [7606c722-5412-4996-af94-208816b8ae72] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.003931053s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-088493 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.28s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.97s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-088493 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-088493 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.97s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (91.34s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-088493 --alsologtostderr -v=3
E0903 23:37:46.319815  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/auto-380966/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0903 23:37:46.326321  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/auto-380966/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0903 23:37:46.337667  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/auto-380966/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0903 23:37:46.359865  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/auto-380966/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0903 23:37:46.401297  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/auto-380966/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0903 23:37:46.482748  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/auto-380966/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0903 23:37:46.644316  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/auto-380966/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0903 23:37:46.966560  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/auto-380966/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0903 23:37:47.608470  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/auto-380966/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-088493 --alsologtostderr -v=3: (1m31.335961617s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (91.34s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.25s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-799704 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [c5a77cdc-b96a-414e-b77c-7761decc6968] Pending
E0903 23:37:48.889845  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/auto-380966/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "busybox" [c5a77cdc-b96a-414e-b77c-7761decc6968] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0903 23:37:51.451210  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/auto-380966/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "busybox" [c5a77cdc-b96a-414e-b77c-7761decc6968] Running
E0903 23:37:56.573458  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/auto-380966/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.003832687s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-799704 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.25s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.93s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-799704 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-799704 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.93s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (91.4s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-799704 --alsologtostderr -v=3
E0903 23:38:06.815662  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/auto-380966/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0903 23:38:27.297544  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/auto-380966/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0903 23:38:37.764252  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/kindnet-380966/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0903 23:38:37.770693  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/kindnet-380966/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0903 23:38:37.782079  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/kindnet-380966/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0903 23:38:37.803468  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/kindnet-380966/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0903 23:38:37.844860  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/kindnet-380966/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0903 23:38:37.926269  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/kindnet-380966/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0903 23:38:38.087831  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/kindnet-380966/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0903 23:38:38.410089  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/kindnet-380966/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0903 23:38:39.052304  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/kindnet-380966/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0903 23:38:40.334663  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/kindnet-380966/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0903 23:38:42.895947  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/kindnet-380966/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-799704 --alsologtostderr -v=3: (1m31.397163058s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (91.40s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-434043 -n no-preload-434043
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-434043 -n no-preload-434043: exit status 7 (76.148437ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-434043 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.20s)
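Note: minikube status encodes machine state in its exit code, and the harness treats exit 7 with a {{.Host}} value of "Stopped" as acceptable after a stop (the "may be ok" above). A sketch that distinguishes that case (binary path and profile copied from the run; the specific value 7 is taken from this log rather than from documented constants):

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		err := exec.Command("out/minikube-linux-amd64", "status",
			"--format={{.Host}}", "-p", "no-preload-434043",
			"-n", "no-preload-434043").Run()
		// Exit 7 with a stopped host is expected right after `minikube stop`.
		var ee *exec.ExitError
		if errors.As(err, &ee) && ee.ExitCode() == 7 {
			fmt.Println("host stopped (exit 7), acceptable after a stop")
			return
		}
		if err != nil {
			fmt.Println("unexpected status failure:", err)
			return
		}
		fmt.Println("host is running")
	}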

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (56.3s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-434043 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.0
E0903 23:38:48.018307  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/kindnet-380966/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0903 23:38:58.260488  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/kindnet-380966/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0903 23:38:59.139134  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/functional-381687/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-434043 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.0: (55.879314339s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-434043 -n no-preload-434043
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (56.30s)
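Note: SecondStart reruns the exact start invocation against the stopped profile; passing means the profile comes back with the same configuration (no preload, same Kubernetes version) rather than being recreated, which the UserAppExistsAfterStop and AddonExistsAfterStop checks below then confirm.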

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-088493 -n embed-certs-088493
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-088493 -n embed-certs-088493: exit status 7 (76.333919ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-088493 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.21s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (50.67s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-088493 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.0
E0903 23:39:08.259592  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/auto-380966/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0903 23:39:12.839565  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/calico-380966/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0903 23:39:12.845993  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/calico-380966/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0903 23:39:12.857444  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/calico-380966/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0903 23:39:12.878882  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/calico-380966/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0903 23:39:12.920374  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/calico-380966/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0903 23:39:13.001870  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/calico-380966/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0903 23:39:13.163986  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/calico-380966/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0903 23:39:13.485729  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/calico-380966/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0903 23:39:14.127933  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/calico-380966/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0903 23:39:15.409617  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/calico-380966/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0903 23:39:17.970940  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/calico-380966/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0903 23:39:18.742666  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/kindnet-380966/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0903 23:39:23.092650  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/calico-380966/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-088493 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.0: (50.174370895s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-088493 -n embed-certs-088493
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (50.67s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.26s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-799704 -n default-k8s-diff-port-799704
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-799704 -n default-k8s-diff-port-799704: exit status 7 (97.218438ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-799704 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.26s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (51.29s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-799704 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.0
E0903 23:39:33.334847  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/calico-380966/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-799704 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.0: (51.039852525s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-799704 -n default-k8s-diff-port-799704
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (51.29s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (14.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-frwrg" [e00e2f63-333a-4223-b62a-9ef56fc3eb09] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-frwrg" [e00e2f63-333a-4223-b62a-9ef56fc3eb09] Running
E0903 23:39:49.580791  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/custom-flannel-380966/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0903 23:39:49.587208  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/custom-flannel-380966/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0903 23:39:49.598539  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/custom-flannel-380966/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0903 23:39:49.620061  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/custom-flannel-380966/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0903 23:39:49.662356  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/custom-flannel-380966/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0903 23:39:49.744343  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/custom-flannel-380966/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0903 23:39:49.905696  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/custom-flannel-380966/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0903 23:39:50.227756  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/custom-flannel-380966/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0903 23:39:50.869121  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/custom-flannel-380966/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0903 23:39:52.150918  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/custom-flannel-380966/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0903 23:39:53.816736  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/calico-380966/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 14.005861264s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (14.01s)
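For reference, the readiness check this test performs can be approximated with kubectl alone. A minimal sketch, assuming the kubeconfig context from this run; the namespace, label selector, and timeout are the ones the test waits on:

	kubectl --context no-preload-434043 -n kubernetes-dashboard \
	  wait --for=condition=ready pod -l k8s-app=kubernetes-dashboard --timeout=9m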

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-frwrg" [e00e2f63-333a-4223-b62a-9ef56fc3eb09] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.705851689s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-434043 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.00s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (11.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-5ghml" [32ac1f6f-06bd-48cb-971b-afec4be7f272] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-5ghml" [32ac1f6f-06bd-48cb-971b-afec4be7f272] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 11.004060607s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (11.01s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.29s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-434043 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.29s)

TestStartStop/group/no-preload/serial/Pause (3.65s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-434043 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p no-preload-434043 --alsologtostderr -v=1: (1.300610589s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-434043 -n no-preload-434043
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-434043 -n no-preload-434043: exit status 2 (326.884856ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-434043 -n no-preload-434043
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-434043 -n no-preload-434043: exit status 2 (314.179377ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-434043 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-434043 -n no-preload-434043
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-434043 -n no-preload-434043
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.65s)
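The pause cycle above can be replayed by hand with the same commands the test runs; a minimal shell sketch (exit status 2 from status is expected while the node is paused, which is why the test marks it "may be ok"):

	out/minikube-linux-amd64 pause -p no-preload-434043 --alsologtostderr -v=1
	out/minikube-linux-amd64 status --format='{{.APIServer}}' -p no-preload-434043 -n no-preload-434043 || true  # prints Paused
	out/minikube-linux-amd64 status --format='{{.Kubelet}}' -p no-preload-434043 -n no-preload-434043 || true    # prints Stopped
	out/minikube-linux-amd64 unpause -p no-preload-434043 --alsologtostderr -v=1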

TestStartStop/group/newest-cni/serial/FirstStart (50.33s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-959437 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.0
E0903 23:40:06.323637  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/bridge-380966/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0903 23:40:06.329981  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/bridge-380966/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0903 23:40:06.341380  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/bridge-380966/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0903 23:40:06.362794  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/bridge-380966/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0903 23:40:06.404236  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/bridge-380966/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0903 23:40:06.485760  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/bridge-380966/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0903 23:40:06.647365  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/bridge-380966/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0903 23:40:06.969485  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/bridge-380966/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-959437 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.0: (50.329540005s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (50.33s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-5ghml" [32ac1f6f-06bd-48cb-971b-afec4be7f272] Running
E0903 23:40:07.611344  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/bridge-380966/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0903 23:40:08.893556  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/bridge-380966/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0903 23:40:10.077436  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/custom-flannel-380966/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0903 23:40:11.455493  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/bridge-380966/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.006143084s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-088493 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.27s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-088493 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.27s)
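The image audit boils down to dumping the image list as JSON and flagging anything outside the stock Kubernetes set. A rough equivalent; the jq filter and the repoTags field name are illustrative assumptions, not part of the test:

	out/minikube-linux-amd64 -p embed-certs-088493 image list --format=json \
	  | jq -r '.[].repoTags[]' | grep -v '^registry.k8s.io/'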

TestStartStop/group/embed-certs/serial/Pause (3s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-088493 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-088493 -n embed-certs-088493
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-088493 -n embed-certs-088493: exit status 2 (260.729563ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-088493 -n embed-certs-088493
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-088493 -n embed-certs-088493: exit status 2 (263.855091ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-088493 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-088493 -n embed-certs-088493
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-088493 -n embed-certs-088493
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.00s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (12.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-h8zcl" [f43ff930-564f-45c8-b87e-a269e77b88f9] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E0903 23:40:26.818330  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/bridge-380966/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-h8zcl" [f43ff930-564f-45c8-b87e-a269e77b88f9] Running
E0903 23:40:30.181507  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/auto-380966/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0903 23:40:30.559053  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/custom-flannel-380966/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 12.004319873s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (12.01s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-h8zcl" [f43ff930-564f-45c8-b87e-a269e77b88f9] Running
E0903 23:40:34.778668  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/calico-380966/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004057137s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-799704 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-799704 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.24s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (2.87s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-799704 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-799704 -n default-k8s-diff-port-799704
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-799704 -n default-k8s-diff-port-799704: exit status 2 (250.595305ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-799704 -n default-k8s-diff-port-799704
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-799704 -n default-k8s-diff-port-799704: exit status 2 (273.360264ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-799704 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-799704 -n default-k8s-diff-port-799704
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-799704 -n default-k8s-diff-port-799704
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.87s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.92s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-959437 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.92s)
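The enable step above shows the general shape of overriding an addon's image and registry at enable time; the command below is the one from this run, reflowed for readability:

	out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-959437 \
	  --images=MetricsServer=registry.k8s.io/echoserver:1.4 \
	  --registries=MetricsServer=fake.domain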

TestStartStop/group/newest-cni/serial/Stop (10.34s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-959437 --alsologtostderr -v=3
E0903 23:40:58.234217  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/enable-default-cni-380966/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0903 23:40:58.240616  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/enable-default-cni-380966/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0903 23:40:58.252004  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/enable-default-cni-380966/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0903 23:40:58.273468  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/enable-default-cni-380966/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0903 23:40:58.314939  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/enable-default-cni-380966/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0903 23:40:58.396533  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/enable-default-cni-380966/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0903 23:40:58.558215  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/enable-default-cni-380966/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0903 23:40:58.879937  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/enable-default-cni-380966/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0903 23:40:59.521857  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/enable-default-cni-380966/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0903 23:41:00.803411  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/enable-default-cni-380966/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0903 23:41:03.161065  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/addons-389176/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0903 23:41:03.365684  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/enable-default-cni-380966/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0903 23:41:04.634775  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/flannel-380966/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-959437 --alsologtostderr -v=3: (10.335555003s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (10.34s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-959437 -n newest-cni-959437
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-959437 -n newest-cni-959437: exit status 7 (66.309473ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-959437 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/newest-cni/serial/SecondStart (36.68s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-959437 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.0
E0903 23:41:08.487610  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/enable-default-cni-380966/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0903 23:41:11.521247  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/custom-flannel-380966/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0903 23:41:18.729988  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/enable-default-cni-380966/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0903 23:41:21.627077  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/kindnet-380966/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0903 23:41:25.116233  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/flannel-380966/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0903 23:41:28.262800  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/bridge-380966/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0903 23:41:39.211836  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/enable-default-cni-380966/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-959437 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.0: (36.318856704s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-959437 -n newest-cni-959437
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (36.68s)
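The second start reuses exactly the flags from the first start against the stopped profile; reproducing it by hand looks like this (command copied from this run, reflowed):

	out/minikube-linux-amd64 start -p newest-cni-959437 --memory=3072 --alsologtostderr \
	  --wait=apiserver,system_pods,default_sa --network-plugin=cni \
	  --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 \
	  --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.34.0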

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.28s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-959437 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.28s)

TestStartStop/group/newest-cni/serial/Pause (2.74s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-959437 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-959437 -n newest-cni-959437
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-959437 -n newest-cni-959437: exit status 2 (251.738734ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-959437 -n newest-cni-959437
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-959437 -n newest-cni-959437: exit status 2 (245.123028ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-959437 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-959437 -n newest-cni-959437
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-959437 -n newest-cni-959437
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.74s)

TestStartStop/group/old-k8s-version/serial/Stop (5.3s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-335468 --alsologtostderr -v=3
E0903 23:41:56.700194  113288 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-109162/.minikube/profiles/calico-380966/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-335468 --alsologtostderr -v=3: (5.303975313s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (5.30s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-335468 -n old-k8s-version-335468
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-335468 -n old-k8s-version-335468: exit status 7 (66.71048ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-335468 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)
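Enabling an addon while the profile is stopped only records the setting in the profile config (status exits 7 on a stopped host, which the test tolerates); the addon is deployed on the next start. A minimal sketch of the sequence from this run:

	out/minikube-linux-amd64 status --format='{{.Host}}' -p old-k8s-version-335468 -n old-k8s-version-335468 || true  # prints Stopped, exit 7
	out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-335468 --images=MetricsScraper=registry.k8s.io/echoserver:1.4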

Test skip (40/322)

Order  Skipped test  Duration
5 TestDownloadOnly/v1.20.0/cached-images 0
6 TestDownloadOnly/v1.20.0/binaries 0
7 TestDownloadOnly/v1.20.0/kubectl 0
14 TestDownloadOnly/v1.34.0/cached-images 0
15 TestDownloadOnly/v1.34.0/binaries 0
16 TestDownloadOnly/v1.34.0/kubectl 0
20 TestDownloadOnlyKic 0
29 TestAddons/serial/Volcano 0.32
33 TestAddons/serial/GCPAuth/RealCredentials 0
40 TestAddons/parallel/Olm 0
47 TestAddons/parallel/AmdGpuDevicePlugin 0
51 TestDockerFlags 0
54 TestDockerEnvContainerd 0
56 TestHyperKitDriverInstallOrUpdate 0
57 TestHyperkitDriverSkipUpgrade 0
108 TestFunctional/parallel/DockerEnv 0
109 TestFunctional/parallel/PodmanEnv 0
137 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
138 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
139 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
140 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
141 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
142 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
143 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
144 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
157 TestFunctionalNewestKubernetes 0
158 TestGvisorAddon 0
180 TestImageBuild 0
207 TestKicCustomNetwork 0
208 TestKicExistingNetwork 0
209 TestKicCustomSubnet 0
210 TestKicStaticIP 0
242 TestChangeNoneUser 0
245 TestScheduledStopWindows 0
247 TestSkaffold 0
249 TestInsufficientStorage 0
253 TestMissingContainerUpgrade 0
257 TestNetworkPlugins/group/kubenet 3.07
266 TestNetworkPlugins/group/cilium 3.32
272 TestStartStop/group/disable-driver-mounts 0.19

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.34.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.34.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.0/cached-images (0.00s)

TestDownloadOnly/v1.34.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.34.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.0/binaries (0.00s)

TestDownloadOnly/v1.34.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.34.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.0/kubectl (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/serial/Volcano (0.32s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:850: skipping: crio not supported
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-389176 addons disable volcano --alsologtostderr -v=1
--- SKIP: TestAddons/serial/Volcano (0.32s)

TestAddons/serial/GCPAuth/RealCredentials (0s)

=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:759: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestAddons/parallel/AmdGpuDevicePlugin (0s)

=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1033: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

TestFunctionalNewestKubernetes (0s)

=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/kubenet (3.07s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:636: 
----------------------- debugLogs start: kubenet-380966 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-380966

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-380966

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-380966

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-380966

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-380966

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-380966

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-380966

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-380966

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-380966

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-380966

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-380966" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-380966"

>>> host: /etc/hosts:
* Profile "kubenet-380966" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-380966"

>>> host: /etc/resolv.conf:
* Profile "kubenet-380966" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-380966"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-380966

>>> host: crictl pods:
* Profile "kubenet-380966" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-380966"

>>> host: crictl containers:
* Profile "kubenet-380966" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-380966"

>>> k8s: describe netcat deployment:
error: context "kubenet-380966" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-380966" does not exist

>>> k8s: netcat logs:
error: context "kubenet-380966" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-380966" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-380966" does not exist

>>> k8s: coredns logs:
error: context "kubenet-380966" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-380966" does not exist

>>> k8s: api server logs:
error: context "kubenet-380966" does not exist

>>> host: /etc/cni:
* Profile "kubenet-380966" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-380966"

>>> host: ip a s:
* Profile "kubenet-380966" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-380966"

>>> host: ip r s:
* Profile "kubenet-380966" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-380966"

>>> host: iptables-save:
* Profile "kubenet-380966" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-380966"

>>> host: iptables table nat:
* Profile "kubenet-380966" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-380966"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-380966" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-380966" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-380966" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-380966" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-380966"

>>> host: kubelet daemon config:
* Profile "kubenet-380966" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-380966"

>>> k8s: kubelet logs:
* Profile "kubenet-380966" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-380966"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-380966" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-380966"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-380966" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-380966"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-380966

>>> host: docker daemon status:
* Profile "kubenet-380966" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-380966"

>>> host: docker daemon config:
* Profile "kubenet-380966" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-380966"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-380966" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-380966"

>>> host: docker system info:
* Profile "kubenet-380966" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-380966"

>>> host: cri-docker daemon status:
* Profile "kubenet-380966" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-380966"

>>> host: cri-docker daemon config:
* Profile "kubenet-380966" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-380966"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-380966" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-380966"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-380966" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-380966"

>>> host: cri-dockerd version:
* Profile "kubenet-380966" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-380966"

>>> host: containerd daemon status:
* Profile "kubenet-380966" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-380966"

>>> host: containerd daemon config:
* Profile "kubenet-380966" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-380966"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-380966" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-380966"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-380966" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-380966"

>>> host: containerd config dump:
* Profile "kubenet-380966" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-380966"

>>> host: crio daemon status:
* Profile "kubenet-380966" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-380966"

>>> host: crio daemon config:
* Profile "kubenet-380966" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-380966"

>>> host: /etc/crio:
* Profile "kubenet-380966" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-380966"

>>> host: crio config:
* Profile "kubenet-380966" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-380966"
----------------------- debugLogs end: kubenet-380966 [took: 2.907455598s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-380966" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-380966
--- SKIP: TestNetworkPlugins/group/kubenet (3.07s)
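
The wall of "context was not found" and "Profile ... not found" messages above is expected: the subtest was skipped before any cluster existed, yet the harness still runs its debugLogs collection against the would-be profile, so every probe fails. A minimal sketch of that collection loop, assuming a hypothetical debugLogs helper rather than minikube's actual implementation:

package example

import (
	"fmt"
	"os/exec"
)

// debugLogs runs a fixed set of kubectl and minikube probes against a
// profile and prints whatever comes back. Because the profile here was
// never started, each probe returns a missing-context or missing-profile
// error, exactly as in the log above.
func debugLogs(profile string) {
	probes := [][]string{
		{"kubectl", "--context", profile, "get", "nodes"},
		{"kubectl", "--context", profile, "describe", "deployment", "netcat"},
		{"out/minikube-linux-amd64", "-p", profile, "ssh", "cat /etc/resolv.conf"},
	}
	for _, args := range probes {
		out, err := exec.Command(args[0], args[1:]...).CombinedOutput()
		fmt.Printf(">>> %v:\n%s(err: %v)\n\n", args, out, err)
	}
}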

TestNetworkPlugins/group/cilium (3.32s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it interferes with other tests and is outdated
panic.go:636: 
----------------------- debugLogs start: cilium-380966 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-380966

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-380966

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-380966

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-380966

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-380966

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-380966

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-380966

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-380966

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-380966

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-380966

>>> host: /etc/nsswitch.conf:
* Profile "cilium-380966" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-380966"

>>> host: /etc/hosts:
* Profile "cilium-380966" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-380966"

>>> host: /etc/resolv.conf:
* Profile "cilium-380966" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-380966"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-380966

>>> host: crictl pods:
* Profile "cilium-380966" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-380966"

>>> host: crictl containers:
* Profile "cilium-380966" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-380966"

>>> k8s: describe netcat deployment:
error: context "cilium-380966" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-380966" does not exist

>>> k8s: netcat logs:
error: context "cilium-380966" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-380966" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-380966" does not exist

>>> k8s: coredns logs:
error: context "cilium-380966" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-380966" does not exist

>>> k8s: api server logs:
error: context "cilium-380966" does not exist

>>> host: /etc/cni:
* Profile "cilium-380966" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-380966"

>>> host: ip a s:
* Profile "cilium-380966" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-380966"

>>> host: ip r s:
* Profile "cilium-380966" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-380966"

>>> host: iptables-save:
* Profile "cilium-380966" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-380966"

>>> host: iptables table nat:
* Profile "cilium-380966" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-380966"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-380966

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-380966

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-380966" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-380966" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-380966

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-380966

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-380966" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-380966" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-380966" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-380966" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-380966" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-380966" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-380966"

>>> host: kubelet daemon config:
* Profile "cilium-380966" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-380966"

>>> k8s: kubelet logs:
* Profile "cilium-380966" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-380966"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-380966" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-380966"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-380966" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-380966"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-380966

>>> host: docker daemon status:
* Profile "cilium-380966" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-380966"

>>> host: docker daemon config:
* Profile "cilium-380966" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-380966"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-380966" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-380966"

>>> host: docker system info:
* Profile "cilium-380966" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-380966"

>>> host: cri-docker daemon status:
* Profile "cilium-380966" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-380966"

>>> host: cri-docker daemon config:
* Profile "cilium-380966" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-380966"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-380966" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-380966"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-380966" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-380966"

>>> host: cri-dockerd version:
* Profile "cilium-380966" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-380966"

>>> host: containerd daemon status:
* Profile "cilium-380966" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-380966"

>>> host: containerd daemon config:
* Profile "cilium-380966" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-380966"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-380966" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-380966"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-380966" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-380966"

>>> host: containerd config dump:
* Profile "cilium-380966" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-380966"

>>> host: crio daemon status:
* Profile "cilium-380966" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-380966"

>>> host: crio daemon config:
* Profile "cilium-380966" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-380966"

>>> host: /etc/crio:
* Profile "cilium-380966" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-380966"

>>> host: crio config:
* Profile "cilium-380966" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-380966"
----------------------- debugLogs end: cilium-380966 [took: 3.174584551s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-380966" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-380966
--- SKIP: TestNetworkPlugins/group/cilium (3.32s)

TestStartStop/group/disable-driver-mounts (0.19s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-005091" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-005091
--- SKIP: TestStartStop/group/disable-driver-mounts (0.19s)
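
The RUN, PAUSE, CONT sequence above is ordinary go test output for a parallel subtest: t.Parallel() pauses the subtest until the serial portion of its group finishes, after which it is continued and, in this run, skipped immediately because the active driver is kvm2 rather than virtualbox. A sketch of that shape, with a hypothetical driver helper (the real logic is in start_stop_delete_test.go):

package example

import "testing"

// driver is a hypothetical helper standing in for however the suite
// detects the active minikube driver (kvm2 in this report).
func driver() string { return "kvm2" }

func TestStartStopGroup(t *testing.T) {
	t.Run("disable-driver-mounts", func(t *testing.T) {
		t.Parallel() // printed as "=== PAUSE", later resumed as "=== CONT"
		if driver() != "virtualbox" {
			t.Skip("only runs on virtualbox")
		}
	})
}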